Sample records for quadrature rules based

  1. Quadrature rules with multiple nodes for evaluating integrals with strong singularities

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.; Spalevic, Miodrag M.

    2006-05-01

    We present a method based on the Chakalov-Popoviciu quadrature formula of Lobatto type, a rather general case of quadrature with multiple nodes, for approximating integrals defined by Cauchy principal values or by Hadamard finite parts. As a starting point we use the results obtained by L. Gori and E. Santi (cf. On the evaluation of Hilbert transforms by means of a particular class of Turan quadrature rules, Numer. Algorithms 10 (1995), 27-39; Quadrature rules based on s-orthogonal polynomials for evaluating integrals with strong singularities, Oberwolfach Proceedings: Applications and Computation of Orthogonal Polynomials, ISNM 131, Birkhauser, Basel, 1999, pp. 109-119). We generalize their results by using some of our numerical procedures for stable calculation of the quadrature formula with multiple nodes of Gaussian type and proposed methods for estimating the remainder term in such type of quadrature formulae. Numerical examples, illustrations and comparisons are also shown.

  2. The Nature of the Nodes, Weights and Degree of Precision in Gaussian Quadrature Rules

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2011-01-01

    We present a comprehensive proof of the theorem that relates the weights and nodes of a Gaussian quadrature rule to its degree of precision. This level of detail is often absent in modern texts on numerical analysis. We show that the degree of precision is maximal, and that the approximation error in Gaussian quadrature is minimal, in a…
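
    A quick numerical check of the relationship this abstract refers to (an illustration, not taken from the article itself): an n-point Gauss-Legendre rule integrates polynomials exactly up to degree 2n-1 and first fails at degree 2n. The sketch below assumes only NumPy.

        import numpy as np

        # Hedged check: a 4-point Gauss-Legendre rule should integrate x**k on
        # [-1, 1] exactly for k <= 7 and show a nonzero error at k = 8.
        n = 4
        x, w = np.polynomial.legendre.leggauss(n)
        for k in range(2 * n + 1):
            exact = 2.0 / (k + 1) if k % 2 == 0 else 0.0
            print(k, abs(np.sum(w * x ** k) - exact))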

  3. Thin-plate spline quadrature of geodetic integrals

    NASA Technical Reports Server (NTRS)

    Vangysen, Herman

    1989-01-01

    Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes's formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.

  4. The generation of arbitrary order, non-classical, Gauss-type quadrature for transport applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spence, Peter J., E-mail: peter.spence@awe.co.uk

    A method is presented, based upon the Stieltjes method (1884), for the determination of non-classical Gauss-type quadrature rules, and the associated sets of abscissae and weights. The method is then used to generate a number of quadrature sets, to arbitrary order, which are primarily aimed at deterministic transport calculations. The quadrature rules and sets detailed include arbitrary order reproductions of those presented by Abu-Shumays in [4,8] (known as the QR sets, but labelled QRA here), in addition to a number of new rules and associated sets; these are generated in a similar way, and we label them the QRS quadrature sets. The method presented here shifts the inherent difficulty (encountered by Abu-Shumays) associated with solving the non-linear moment equations, particular to the required quadrature rule, to one of the determination of non-classical weight functions and the subsequent calculation of various associated inner products. Once a quadrature rule has been written in a standard form, with an associated weight function having been identified, the calculation of the required inner products is achieved using specific variable transformations, in addition to the use of rapid, highly accurate quadrature suited to this purpose. The associated non-classical Gauss quadrature sets can then be determined, and this can be done to any order very rapidly. In this paper, instead of listing weights and abscissae for the different quadrature sets detailed (of which there are a number), the MATLAB code written to generate them is included as Appendix D. The accuracy and efficacy (in a transport setting) of the quadrature sets presented is not tested in this paper (although the accuracy of the QRA quadrature sets has been studied in [12,13]), but comparisons to tabulated results listed in [8] are made. When comparisons are made with one of the azimuthal QRA sets detailed in [8], the inherent difficulty in the method of generation, used there, becomes
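
    As a rough illustration of the Stieltjes-based workflow described above (not the paper's MATLAB code, and not its QRA/QRS weight functions), the sketch below computes recurrence coefficients for a user-supplied weight by discretized inner products and then obtains Gauss-type nodes and weights from the Jacobi-matrix eigenproblem; the weight w(x) = -log(x) on (0, 1) is an illustrative choice.

        import numpy as np

        def gauss_from_weight(w, n, npts=20000):
            # Discretized Stieltjes procedure: approximate the inner products on a
            # fine midpoint grid (a crude stand-in for the paper's high-accuracy
            # inner-product quadrature).
            x = (np.arange(npts) + 0.5) / npts
            wx, dx = w(x), 1.0 / npts
            inner = lambda f, g: np.sum(f * g * wx) * dx

            alpha, beta = np.zeros(n), np.zeros(n)   # beta[0] holds the zeroth moment
            p_prev, p_curr = np.zeros_like(x), np.ones_like(x)
            beta[0] = inner(p_curr, p_curr)
            for k in range(n):
                alpha[k] = inner(x * p_curr, p_curr) / inner(p_curr, p_curr)
                if k + 1 < n:
                    p_next = (x - alpha[k]) * p_curr - (beta[k] if k else 0.0) * p_prev
                    beta[k + 1] = inner(p_next, p_next) / inner(p_curr, p_curr)
                    p_prev, p_curr = p_curr, p_next

            # Golub-Welsch step: eigen-decompose the symmetric Jacobi matrix.
            J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
            nodes, vecs = np.linalg.eigh(J)
            return nodes, beta[0] * vecs[0, :] ** 2

        nodes, weights = gauss_from_weight(lambda x: -np.log(x), 5)
        print(nodes, weights, weights.sum())   # weights sum to the zeroth moment (1 here)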

  5. Analog quadrature signal to phase angle data conversion by a quadrature digitizer and quadrature counter

    DOEpatents

    Buchenauer, C.J.

    1981-09-23

    The quadrature phase angle φ(t) of a pair of quadrature signals S₁(t) and S₂(t) is digitally encoded on a real time basis by a quadrature digitizer for fractional φ(t) rotational excursions and by a quadrature up/down counter for full φ(t) rotations. The pair of quadrature signals are of the form S₁(t) = k(t) sin φ(t) and S₂(t) = k(t) cos φ(t), where k(t) is a signal common to both. The quadrature digitizer and the quadrature up/down counter may be used together or singularly as desired or required. Optionally, a digital-to-analog converter may follow the outputs of the quadrature digitizer and the quadrature up/down counter to provide an analog signal output of the quadrature phase angle φ(t).
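
    A software analogue of the conversion the patent implements in hardware (a hedged sketch, not the patented circuit): arctan2 recovers the fractional excursion of φ(t) from the two quadrature signals, and unwrapping plays the role of the up/down counter that tracks full rotations. The phase trajectory and k(t) below are invented for illustration.

        import numpy as np

        t = np.linspace(0.0, 1.0, 4000)
        phi_true = 2 * np.pi * 3.5 * t ** 2          # hypothetical phase trajectory
        k = 1.0 + 0.2 * np.cos(2 * np.pi * 5 * t)    # common amplitude factor k(t)
        s1, s2 = k * np.sin(phi_true), k * np.cos(phi_true)

        frac = np.arctan2(s1, s2)     # fractional excursion, in (-pi, pi]
        phi = np.unwrap(frac)         # full rotations restored (the counter's role)
        print(np.max(np.abs(phi - phi_true)))   # small, up to numerical error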

  6. Analog quadrature signal to phase angle data conversion by a quadrature digitizer and quadrature counter

    DOEpatents

    Buchenauer, C. Jerald

    1984-01-01

    The quadrature phase angle φ(t) of a pair of quadrature signals S₁(t) and S₂(t) is digitally encoded on a real time basis by a quadrature digitizer for fractional φ(t) rotational excursions and by a quadrature up/down counter for full φ(t) rotations. The pair of quadrature signals are of the form S₁(t) = k(t) sin φ(t) and S₂(t) = k(t) cos φ(t), where k(t) is a signal common to both. The quadrature digitizer and the quadrature up/down counter may be used together or singularly as desired or required. Optionally, a digital-to-analog converter may follow the outputs of the quadrature digitizer and the quadrature up/down counter to provide an analog signal output of the quadrature phase angle φ(t).

  7. A multivariate quadrature based moment method for LES based modeling of supersonic combustion

    NASA Astrophysics Data System (ADS)

    Donde, Pratik; Koo, Heeseok; Raman, Venkat

    2012-07-01

    The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.

  8. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  9. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
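
    The starting point mentioned above, namely that the plain trapezoidal rule is exceptionally accurate for smooth periodic integrands, is easy to see numerically; the sketch below (an illustration, not from the report) integrates exp(cos x) over one period, whose exact value is 2π·I₀(1).

        import numpy as np

        f = lambda x: np.exp(np.cos(x))          # smooth 2*pi-periodic test integrand
        exact = 2 * np.pi * np.i0(1.0)           # closed form for the period integral
        for n in (4, 8, 16, 32):
            x = 2 * np.pi * np.arange(n) / n
            approx = 2 * np.pi / n * np.sum(f(x))
            print(n, abs(approx - exact))        # error drops extremely fast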

  10. Increasing reliability of Gauss-Kronrod quadrature by Eratosthenes' sieve method

    NASA Astrophysics Data System (ADS)

    Adam, Gh.; Adam, S.

    2001-04-01

    The reliability of the local error estimates returned by the Gauss-Kronrod quadrature rules can be raised up to the theoretical 100% rate of success, under error estimate sharpening, provided a number of natural validating conditions are required. The self-validating scheme of the local error estimates, which is easy to implement and adds little supplementary computing effort, strengthens considerably the correctness of the decisions within the automatic adaptive quadrature.

  11. Discrete Ordinate Quadrature Selection for Reactor-based Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Joshua J; Evans, Thomas M; Davidson, Gregory G

    2013-01-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work.
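
    For readers unfamiliar with the quadrature families named above, the sketch below builds a simple Gauss-Chebyshev-style product set (Gauss-Legendre polar cosines, equally weighted azimuths) and checks that it integrates smooth functions over the unit sphere; it is an illustrative construction, not the LS, QR, or LDFE sets from the paper.

        import numpy as np

        n_polar, n_azim = 8, 16
        mu, w_mu = np.polynomial.legendre.leggauss(n_polar)    # polar cosines
        phi = (np.arange(n_azim) + 0.5) * 2 * np.pi / n_azim   # azimuthal angles
        w_phi = np.full(n_azim, 2 * np.pi / n_azim)

        MU = np.repeat(mu, n_azim)              # direction cosines, flattened
        W = np.outer(w_mu, w_phi).ravel()       # product weights

        # sanity checks: the integral of 1 over the sphere is 4*pi, of mu^2 is 4*pi/3
        print(W.sum(), np.sum(W * MU ** 2))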

  12. Discrete ordinate quadrature selection for reactor-based Eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, J. J.; Evans, T. M.; Davidson, G. G.

    2013-07-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work. (authors)

  13. Best quadrature formula on Sobolev class with Chebyshev weight

    NASA Astrophysics Data System (ADS)

    Xie, Congcong

    2008-05-01

    Using the best interpolation function based on given function information, we present a best quadrature rule for functions on the Sobolev class KW^r[-1,1] with Chebyshev weight. The given function information means that the values of a function f ∈ KW^r[-1,1] and its derivatives up to order r-1 at a set of nodes x are given. Error bounds are obtained, and the method is illustrated by some examples.

  14. Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.

    2018-03-20

    A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. Here, the method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.

  15. Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.

    A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. Here, the method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.

  16. Improving the Accuracy of Quadrature Method Solutions of Fredholm Integral Equations That Arise from Nonlinear Two-Point Boundary Value Problems

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Pennline, James A.

    1999-01-01

    In this paper we are concerned with high-accuracy quadrature method solutions of nonlinear Fredholm integral equations of the form y(x) = r(x) + ∫₀¹ g(x,t) F(t, y(t)) dt, 0 ≤ x ≤ 1, where the kernel function g(x,t) is continuous, but its partial derivatives have finite jump discontinuities across x = t. Such integral equations arise, e.g., when one applies Green's function techniques to nonlinear two-point boundary value problems of the form y''(x) = f(x,y(x)), 0 ≤ x ≤ 1, with y(0) = y_0 and y(1) = y_1, or other linear boundary conditions. A quadrature method that is especially suitable and that has been employed for such equations is one based on the trapezoidal rule, which has low accuracy. By analyzing the corresponding Euler-Maclaurin expansion, we derive suitable correction terms that we add to the trapezoidal rule, thus obtaining new numerical quadrature formulas of arbitrarily high accuracy that we also use in defining quadrature methods for the integral equations above. We prove an existence and uniqueness theorem for the quadrature method solutions, and show that their accuracy is the same as that of the underlying quadrature formula. The solution of the nonlinear systems resulting from the quadrature methods is achieved through successive approximations whose convergence is also proved. The results are demonstrated with numerical examples.
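
    A minimal sketch of the basic scheme the paper improves upon (a plain trapezoidal-rule Nystrom discretization with successive approximations, before any Euler-Maclaurin correction terms are added); the kernel, r(x), and F below are illustrative choices, not the paper's test problems.

        import numpy as np

        n = 65
        x = np.linspace(0.0, 1.0, n)
        w = np.full(n, 1.0 / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights

        # Green's-function-like kernel: continuous, with a derivative jump across x = t
        g = lambda xi, ti: np.minimum(xi, ti) * (1.0 - np.maximum(xi, ti))
        F = lambda ti, yi: np.sin(yi)
        r = x * (1.0 - x)

        y = r.copy()
        for _ in range(200):                    # successive approximations
            y_new = r + (g(x[:, None], x[None, :]) * F(x, y)) @ w
            if np.max(np.abs(y_new - y)) < 1e-12:
                break
            y = y_new
        print(y[n // 2])                        # midpoint value of the computed solution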

  17. General implementation of arbitrary nonlinear quadrature phase gates

    NASA Astrophysics Data System (ADS)

    Marek, Petr; Filip, Radim; Ogawa, Hisashi; Sakaguchi, Atsushi; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira

    2018-02-01

    We propose general methodology of deterministic single-mode quantum interaction nonlinearly modifying single quadrature variable of a continuous-variable system. The methodology is based on linear coupling of the system to ancillary systems subsequently measured by quadrature detectors. The nonlinear interaction is obtained by using the data from the quadrature detection for dynamical manipulation of the coupling parameters. This measurement-induced methodology enables direct realization of arbitrary nonlinear quadrature interactions without the need to construct them from the lowest-order gates. Such nonlinear interactions are crucial for more practical and efficient manipulation of continuous quadrature variables as well as qubits encoded in continuous-variable systems.

  18. Quadrature-quadrature phase-shift keying

    NASA Astrophysics Data System (ADS)

    Saha, Debabrata; Birdsall, Theodore G.

    1989-05-01

    Quadrature-quadrature phase-shift keying (Q2PSK) is a spectrally efficient modulation scheme which utilizes available signal space dimensions in a more efficient way than two-dimensional schemes such as QPSK and MSK (minimum-shift keying). It uses two data shaping pulses and two carriers, which are pairwise quadrature in phase, to create a four-dimensional signal space and increases the transmission rate by a factor of two over QPSK and MSK. However, the bit error rate performance depends on the choice of pulse pair. With simple sinusoidal and cosinusoidal data pulses, the Eb/N0 requirement for Pb(E) = 10^-5 is approximately 1.6 dB higher than that of MSK. Without additional constraints, Q2PSK does not maintain a constant envelope. However, a simple block coding provides a constant envelope. This coded signal substantially outperforms MSK and TFM (tamed frequency modulation) in bandwidth efficiency. Like MSK, Q2PSK also has self-clocking and self-synchronizing ability. An optimum class of pulse shapes for use in the Q2PSK format is presented. One suboptimum realization achieves the Nyquist rate of 2 bits/s/Hz using binary detection.

  19. Quadrature demultiplexing using a degenerate vector parametric amplifier.

    PubMed

    Lorences-Riesgo, Abel; Liu, Lan; Olsson, Samuel L I; Malik, Rohit; Kumpera, Aleš; Lundström, Carl; Radic, Stojan; Karlsson, Magnus; Andrekson, Peter A

    2014-12-01

    We report on quadrature demultiplexing of a quadrature phase-shift keying (QPSK) signal into two cross-polarized binary phase-shift keying (BPSK) signals with negligible penalty at a bit-error rate (BER) equal to 10^-9. The all-optical quadrature demultiplexing is achieved using a degenerate vector parametric amplifier operating in phase-insensitive mode. We also propose and demonstrate the use of a novel and simple phase-locked loop (PLL) scheme based on detecting the envelope of one of the signals after demultiplexing in order to achieve stable quadrature decomposition.

  20. Method of mechanical quadratures for solving singular integral equations of various types

    NASA Astrophysics Data System (ADS)

    Sahakyan, A. V.; Amirjanyan, H. A.

    2018-04-01

    The method of mechanical quadratures is proposed as a common approach intended for solving the integral equations defined on finite intervals and containing Cauchy-type singular integrals. This method can be used to solve singular integral equations of the first and second kind, equations with generalized kernel, weakly singular equations, and integro-differential equations. The quadrature rules for several different integrals represented through the same coefficients are presented. This allows one to reduce the integral equations containing integrals of different types to a system of linear algebraic equations.

  1. Automatic quadrature control and measuring system. [using optical coupling circuitry

    NASA Technical Reports Server (NTRS)

    Hamlet, J. F. (Inventor)

    1974-01-01

    A quadrature component cancellation and measuring system comprising a detection system for detecting the quadrature component from a primary signal, including reference circuitry to define the phase of the quadrature component for detection is described. A Raysistor optical coupling control device connects an output from the detection system to a circuit driven by a signal based upon the primary signal. Combining circuitry connects the primary signal and the circuit controlled by the Raysistor device to subtract quadrature components. A known current through the optically sensitive element produces a signal defining the magnitude of the quadrature component.

  2. Improving the Accuracy of Quadrature Method Solutions of Fredholm Integral Equations that Arise from Nonlinear Two-Point Boundary Value Problems

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Pennline, James A.

    1999-01-01

    In this paper we are concerned with high-accuracy quadrature method solutions of nonlinear Fredholm integral equations of the form y(x) = r(x) + ∫₀¹ g(x,t) F(t, y(t)) dt, 0 ≤ x ≤ 1, where the kernel function g(x,t) is continuous, but its partial derivatives have finite jump discontinuities across x = t. Such integral equations arise, e.g., when one applies Green's function techniques to nonlinear two-point boundary value problems of the form y''(x) = f(x,y(x)), 0 ≤ x ≤ 1, with y(0) = y_0 and y(1) = y_1, or other linear boundary conditions. A quadrature method that is especially suitable and that has been employed for such equations is one based on the trapezoidal rule, which has low accuracy. By analyzing the corresponding Euler-Maclaurin expansion, we derive suitable correction terms that we add to the trapezoidal rule, thus obtaining new numerical quadrature formulas of arbitrarily high accuracy that we also use in defining quadrature methods for the integral equations above. We prove an existence and uniqueness theorem for the quadrature method solutions, and show that their accuracy is the same as that of the underlying quadrature formula. The solution of the nonlinear systems resulting from the quadrature methods is achieved through successive approximations whose convergence is also proved. The results are demonstrated with numerical examples.

  3. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  4. A quadrature based method of moments for nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin L.; Vedula, Prakash

    2011-09-01

    Fokker-Planck equations which are nonlinear with respect to their probability densities and occur in many nonequilibrium systems relevant to mean field interaction models, plasmas, fermions and bosons can be challenging to solve numerically. To address some underlying challenges, we propose the application of the direct quadrature based method of moments (DQMOM) for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations (NLFPEs). In DQMOM, probability density (or other distribution) functions are represented using a finite collection of Dirac delta functions, characterized by quadrature weights and locations (or abscissas) that are determined based on constraints due to evolution of generalized moments. Three particular examples of nonlinear Fokker-Planck equations considered in this paper include descriptions of: (i) the Shimizu-Yamada model, (ii) the Desai-Zwanzig model (both of which have been developed as models of muscular contraction) and (iii) fermions and bosons. Results based on DQMOM, for the transient and stationary solutions of the nonlinear Fokker-Planck equations, have been found to be in good agreement with other available analytical and numerical approaches. It is also shown that approximate reconstruction of the underlying probability density function from moments obtained from DQMOM can be satisfactorily achieved using a maximum entropy method.
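
    To make the "weights and abscissas determined from moments" idea concrete, the sketch below performs the classical two-node moment inversion (given the first four moments m0..m3, recover two abscissas and two weights); this is the standard QMOM building block rather than the DQMOM transport formulation used in the paper, and the normal-distribution moments are only an example.

        import numpy as np

        def two_node_quadrature(m):
            m0, m1, m2, m3 = m
            # coefficients of the monic orthogonal polynomial p2(x) = x^2 + c1*x + c0
            A = np.array([[m0, m1], [m1, m2]])
            c0, c1 = np.linalg.solve(A, [-m2, -m3])
            x = np.roots([1.0, c1, c0])                 # abscissas = roots of p2
            # weights chosen so the 2-node rule reproduces m0 and m1 exactly
            w = np.linalg.solve(np.vander(x, increasing=True).T, [m0, m1])
            return x, w

        # first four moments of a standard normal distribution
        x, w = two_node_quadrature([1.0, 0.0, 1.0, 0.0])
        print(x, w)    # abscissas near -1 and +1, weights near 0.5 each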

  5. Differentially coherent quadrature-quadrature phase shift keying (Q2PSK)

    NASA Astrophysics Data System (ADS)

    Saha, Debabrata; El-Ghandour, Osama

    The quadrature-quadrature phase-shift-keying (Q2PSK) signaling scheme uses the vertices of a hypercube of dimension four. A generalized Q2PSK signaling format for differentially coherent detection at the receiver is considered. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. The symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/Nb. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK.

  6. Stochastic collocation using Kronrod-Patterson-Hermite quadrature with moderate delay for subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Liao, Q.; Tchelepi, H.; Zhang, D.

    2015-12-01

    Uncertainty quantification aims at characterizing the impact of input parameters on the output responses and plays an important role in many areas including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interests. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are naturally unnested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.

  7. Directional dual-tree complex wavelet packet transforms for processing quadrature signals.

    PubMed

    Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin

    2016-03-01

    Quadrature signals containing in-phase and quadrature-phase components are used in many signal processing applications in every field of science and engineering. Specifically, Doppler ultrasound systems used to evaluate cardiovascular disorders noninvasively also result in quadrature format signals. In order to obtain directional blood flow information, the quadrature outputs have to be preprocessed using methods such as asymmetrical and symmetrical phasing filter techniques. These resultant directional signals can be employed in order to detect asymptomatic embolic signals caused by small emboli, which are indicators of a possible future stroke, in the cerebral circulation. Various transform-based methods such as Fourier and wavelet were frequently used in processing embolic signals. However, most of the times, the Fourier and discrete wavelet transforms are not appropriate for the analysis of embolic signals due to their non-stationary time-frequency behavior. Alternatively, discrete wavelet packet transform can perform an adaptive decomposition of the time-frequency axis. In this study, directional discrete wavelet packet transforms, which have the ability to map directional information while processing quadrature signals and have less computational complexity than the existing wavelet packet-based methods, are introduced. The performances of proposed methods are examined in detail by using single-frequency, synthetic narrow-band, and embolic quadrature signals.

  8. A fast quadrature-based numerical method for the continuous spectrum biphasic poroviscoelastic model of articular cartilage.

    PubMed

    Stuebner, Michael; Haider, Mansoor A

    2010-06-18

    A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
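
    The algorithmic point above, that an exponential-series (Prony-type) relaxation function lets the convolution integral be advanced while retaining only the previous step's history, can be sketched as follows; the series coefficients, relaxation times, and strain history are illustrative values, not calibrated cartilage parameters.

        import numpy as np

        taus = np.array([0.01, 0.1, 1.0])     # relaxation times of the exponential series
        cs = np.array([0.5, 0.3, 0.2])        # series coefficients
        dt, nsteps = 1e-3, 5000
        t = dt * np.arange(nsteps + 1)
        eps = 1.0 - np.exp(-t / 0.2)          # imposed strain history

        h = np.zeros_like(taus)               # one internal variable per exponential term
        sigma = np.zeros(nsteps + 1)
        decay = np.exp(-dt / taus)
        for k in range(nsteps):
            deps = eps[k + 1] - eps[k]
            # recursive convolution update: only the previous step's history is needed
            h = decay * h + cs * np.exp(-0.5 * dt / taus) * deps
            sigma[k + 1] = h.sum()
        print(sigma[-1])                      # stress-like response at the final time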

  9. Gaussian quadrature for multiple orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Coussement, Jonathan; van Assche, Walter

    2005-06-01

    We study multiple orthogonal polynomials of type I and type II, which have orthogonality conditions with respect to r measures. These polynomials are connected by their recurrence relation of order r+1. First we show a relation with the eigenvalue problem of a banded lower Hessenberg matrix Ln, containing the recurrence coefficients. As a consequence, we easily find that the multiple orthogonal polynomials of type I and type II satisfy a generalized Christoffel-Darboux identity. Furthermore, we explain the notion of multiple Gaussian quadrature (for proper multi-indices), which is an extension of the theory of Gaussian quadrature for orthogonal polynomials and was introduced by Borges. In particular, we show that the quadrature points and quadrature weights can be expressed in terms of the eigenvalue problem of Ln.

  10. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.

  11. Quadrature, Interpolation and Observability

    NASA Technical Reports Server (NTRS)

    Hodges, Lucille McDaniel

    1997-01-01

    Methods of interpolation and quadrature have been used for over 300 years. Improvements in the techniques have been made by many, most notably by Gauss, whose technique applied to polynomials is referred to as Gaussian Quadrature. Stieltjes extended Gauss's method to certain non-polynomial functions as early as 1884. Conditions that guarantee the existence of quadrature formulas for certain collections of functions were studied by Tchebycheff, and his work was extended by others. Today, a class of functions which satisfies these conditions is called a Tchebycheff System. This thesis contains the definition of a Tchebycheff System, along with the theorems, proofs, and definitions necessary to guarantee the existence of quadrature formulas for such systems. Solutions of discretely observable linear control systems are of particular interest, and observability with respect to a given output function is defined. The output function is written as a linear combination of a collection of orthonormal functions. Orthonormal functions are defined, and their properties are discussed. The technique for evaluating the coefficients in the output function involves evaluating the definite integral of functions which can be shown to form a Tchebycheff system. Therefore, quadrature formulas for these integrals exist, and in many cases are known. The technique given is useful in cases where the method of direct calculation is unstable. The condition number of a matrix is defined and shown to be an indication of the degree to which perturbations in data affect the accuracy of the solution. In special cases, the number of data points required for direct calculation is the same as the number required by the method presented in this thesis. But the method is shown to require more data points in other cases. A lower bound for the number of data points required is given.

  12. Automatic quadrature control and measuring system

    NASA Technical Reports Server (NTRS)

    Hamlet, J. F.

    1973-01-01

    Quadrature is separated from amplified signal by use of phase detector, with phase shifter providing appropriate reference. Output of phase detector is further amplified and filtered by dc amplifier. Output of dc amplifier provides signal to neutralize quadrature component of transducer signal.

  13. Comparison of two Galerkin quadrature methods

    DOE PAGES

    Morel, Jim E.; Warsa, James; Franke, Brian C.; ...

    2017-02-21

    Here, we compare two methods for generating Galerkin quadratures. In method 1, the standard S_N method is used to generate the moment-to-discrete matrix and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard S_N method is used to generate the discrete-to-moment matrix and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding S_N equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.
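
    A hedged one-dimensional (slab-geometry) illustration of the two matrices discussed above: for an N-point Gauss-Legendre S_N set with Legendre moments of order 0..N-1, the quadrature-based discrete-to-moment matrix is exactly the inverse of the moment-to-discrete matrix, so the two construction orders coincide; the paper's comparison concerns multidimensional quadratures where they differ.

        import numpy as np
        from numpy.polynomial.legendre import leggauss, Legendre

        N = 8
        mu, w = leggauss(N)                                        # S_N directions and weights
        P = np.array([Legendre.basis(l)(mu) for l in range(N)])    # P[l, n] = P_l(mu_n)

        M = ((2 * np.arange(N)[None, :] + 1) / 2.0) * P.T          # moment-to-discrete
        D = w[None, :] * P                                         # discrete-to-moment

        print(np.max(np.abs(D @ M - np.eye(N))))                   # ~ machine precision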

  14. Comparison of two Galerkin quadrature methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, Jim E.; Warsa, James; Franke, Brian C.

    Here, we compare two methods for generating Galerkin quadratures. In method 1, the standard S_N method is used to generate the moment-to-discrete matrix and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard S_N method is used to generate the discrete-to-moment matrix and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding S_N equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.

  15. Summation rules for a fully nonlocal energy-based quasicontinuum method

    NASA Astrophysics Data System (ADS)

    Amelang, J. S.; Venturini, G. N.; Kochmann, D. M.

    2015-09-01

    The quasicontinuum (QC) method coarse-grains crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. A crucial cornerstone of all QC techniques, summation or quadrature rules efficiently approximate the thermodynamic quantities of interest. Here, we investigate summation rules for a fully nonlocal, energy-based QC method to approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of all atoms in the crystal lattice. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. We review traditional summation rules and discuss their strengths and weaknesses with a focus on energy approximation errors and spurious force artifacts. Moreover, we introduce summation rules which produce no residual or spurious force artifacts in centrosymmetric crystals in the large-element limit under arbitrary affine deformations in two dimensions (and marginal force artifacts in three dimensions), while allowing us to seamlessly bridge to full atomistics. Through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions, we compare the accuracy of the new scheme to various previous ones. Our results confirm that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors. Our numerical benchmark examples include the calculation of elastic constants from completely random QC meshes and the inhomogeneous deformation of aggressively coarse-grained crystals containing nano-voids. In the elastic regime, we directly compare QC results to those of full atomistics to assess global and local errors in complex QC simulations. Going beyond elasticity, we illustrate the performance of the energy-based QC method with the new second-order summation rule by the help of nanoindentation examples with

  16. Coherent detection of frequency-hopped quadrature modulations in the presence of jamming. II - QPR Class I modulation. [Quadrature Partial Response

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1981-01-01

    This paper considers the performance of quadrature partial response (QPR) in the presence of jamming. Although a QPR system employs a single sample detector in its receiver, while quadrature amplitude shift keying (or quadrature phase shift keying) requires a matched-filter type of receiver, it is shown that the coherent detection performances of the two in the presence of the intentional jammer have definite similarities.

  17. Composite Gauss-Legendre Quadrature with Error Control

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
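
    One plausible realization of the idea (the article's own algorithm and error control strategy may differ): a composite Gauss-Legendre rule whose panel count is doubled until two successive estimates agree to a tolerance.

        import numpy as np

        def composite_gauss_legendre(f, a, b, panels, order=4):
            xi, wi = np.polynomial.legendre.leggauss(order)   # rule on [-1, 1]
            edges = np.linspace(a, b, panels + 1)
            total = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
                total += half * np.sum(wi * f(mid + half * xi))   # rule mapped to each panel
            return total

        def integrate(f, a, b, tol=1e-10, order=4):
            panels, prev = 1, composite_gauss_legendre(f, a, b, 1, order)
            while True:
                panels *= 2
                curr = composite_gauss_legendre(f, a, b, panels, order)
                if abs(curr - prev) < tol:       # simple error-control criterion
                    return curr
                prev = curr

        print(integrate(np.exp, 0.0, 1.0) - (np.e - 1.0))   # ~ 0 within the tolerance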

  18. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    PubMed Central

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  19. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    PubMed

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  20. Past and Future SOHO-Ulysses Quadratures

    NASA Technical Reports Server (NTRS)

    Suess, Steven; Poletto, G.

    2006-01-01

    With the launch of SOHO, it again became possible to carry out quadrature observations. In comparison with earlier observations, the new capabilities of coronal spectroscopy with UVCS and in situ ionization state and composition with Ulysses/SWICS enabled new types of studies. Results from two studies serve as examples: (i) The acceleration profile of wind from small coronal holes. (ii) A high-coronal reconnecting current sheet as the source of high ionization state Fe in a CME at Ulysses. Generally quadrature observations last only for a few days, when Ulysses is within ca. 5 degrees of the limb. This means luck is required for the phenomenon of interest to lie along the radial direction to Ulysses. However, when Ulysses is at high southern latitude in winter 2007 and high northern latitude in winter 2008, there will be unusually favorable configurations for quadrature observations with SOHO and corresponding bracketing limb observations from STEREO A/B. Specifically, Ulysses will be within 5 degrees of the limb from December 2006 to May 2007 and within 10 degrees of the limb from December 2007 to May 2008. These long-lasting quadratures and bracketing STEREO A/B observations overcome the limitations inherent in the short observation intervals of typical quadratures. Furthermore, ionization and charge state measurements like those on Ulysses will also be made on STEREO and these will be essential for identification of CME ejecta - one of the prime objectives for STEREO.

  1. Quadrature demodulation based circuit implementation of pulse stream for ultrasonic signal FRI sparse sampling

    NASA Astrophysics Data System (ADS)

    Shoupeng, Song; Zhou, Jiang

    2017-03-01

    Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms. No hardware circuit that can achieve it has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. Elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extracting techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, oscillator, low pass filter (LPF), and root of square sum module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry.
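
    A hedged software sketch of the QD chain described above (mix with cosine/sine at the centre frequency, low-pass filter, then take the root of the sum of squares); the centre frequency, filter settings, and echo shape are illustrative choices, not the paper's hardware parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs, fc = 100e6, 5e6                              # sampling and ultrasonic centre frequency
        t = np.arange(0, 20e-6, 1 / fs)
        env_true = np.exp(-((t - 8e-6) / 1e-6) ** 2)     # hypothetical echo envelope
        sig = env_true * np.cos(2 * np.pi * fc * t)

        i_mix = sig * np.cos(2 * np.pi * fc * t)         # in-phase mixing
        q_mix = sig * np.sin(2 * np.pi * fc * t)         # quadrature mixing
        b, a = butter(4, 2e6, fs=fs)                     # low-pass filter, 2 MHz cutoff
        i_lp, q_lp = filtfilt(b, a, i_mix), filtfilt(b, a, q_mix)
        envelope = 2.0 * np.sqrt(i_lp ** 2 + q_lp ** 2)  # root of square sum (factor 2 restores amplitude)
        print(np.max(np.abs(envelope - env_true)))       # small, apart from filter edge effects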

  2. Effective potentials in nonlinear polycrystals and quadrature formulae

    NASA Astrophysics Data System (ADS)

    Michel, Jean-Claude; Suquet, Pierre

    2017-08-01

    This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471, 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.

  3. Effective potentials in nonlinear polycrystals and quadrature formulae.

    PubMed

    Michel, Jean-Claude; Suquet, Pierre

    2017-08-01

    This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471 , 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.

  4. Quadrature mixture LO suppression via DSW DAC noise dither

    DOEpatents

    Dubbert, Dale F [Cedar Crest, NM; Dudley, Peter A [Albuquerque, NM

    2007-08-21

    A Quadrature Error Corrected Digital Waveform Synthesizer (QECDWS) employs frequency dependent phase error corrections to, in effect, pre-distort the phase characteristic of the chirp to compensate for the frequency dependent phase nonlinearity of the RF and microwave subsystem. In addition, the QECDWS can employ frequency dependent correction vectors to the quadrature amplitude and phase of the synthesized output. The quadrature corrections cancel the radars' quadrature upconverter (mixer) errors to null the unwanted spectral image. A result is the direct generation of an RF waveform, which has a theoretical chirp bandwidth equal to the QECDWS clock frequency (1 to 1.2 GHz) with the high Spurious Free Dynamic Range (SFDR) necessary for high dynamic range radar systems such as SAR. To correct for the problematic upconverter local oscillator (LO) leakage, precision DC offsets can be applied over the chirped pulse using a pseudo-random noise dither. The present dither technique can effectively produce a quadrature DC bias which has the precision required to adequately suppress the LO leakage. A calibration technique can be employed to calculate both the quadrature correction vectors and the LO-nulling DC offsets using the radar built-in test capability.

  5. A MIMO radar quadrature and multi-channel amplitude-phase error combined correction method based on cross-correlation

    NASA Astrophysics Data System (ADS)

    Yun, Lingtong; Zhao, Hongzhong; Du, Mengyuan

    2018-04-01

    Quadrature and multi-channel amplitude-phase errors have to be compensated in I/Q quadrature sampling and in signals passing through multiple channels. A new method that requires neither a filter nor a standard signal is presented in this paper, and it can jointly estimate the quadrature and multi-channel amplitude-phase errors. The method uses the cross-correlation and amplitude ratio between signals to estimate the two amplitude-phase errors simply and effectively. The advantages of this method are verified by computer simulation. Finally, its superiority is also confirmed with measured data from field experiments.

  6. Quantitative phase imaging using grating-based quadrature phase interferometer

    NASA Astrophysics Data System (ADS)

    Wu, Jigang; Yaqoob, Zahid; Heng, Xin; Cui, Xiquan; Yang, Changhuei

    2007-02-01

    In this paper, we report the use of holographic gratings, which act as the free-space equivalent of the 3×3 fiber-optic coupler, to perform full-field phase imaging. By recording two harmonically-related gratings in the same holographic plate, we are able to obtain a nontrivial phase shift between different output ports of the gratings-based Mach-Zehnder interferometer. The phase difference can be adjusted by changing the relative phase of the recording beams when recording the hologram. We have built a Mach-Zehnder interferometer using harmonically-related holographic gratings with 600 and 1200 lines/mm spacing. Two CCD cameras at the output ports of the gratings-based Mach-Zehnder interferometer are used to record the full-field quadrature interferograms, which are subsequently processed to reconstruct the phase image. The imaging system has ~12X magnification with a ~420 μm × 315 μm field of view. To demonstrate the capability of our system, we have successfully performed phase imaging of a pure phase object and a Paramecium caudatum.

  7. General n-dimensional quadrature transform and its application to interferogram demodulation.

    PubMed

    Servin, Manuel; Quiroga, Juan Antonio; Marroquin, Jose Luis

    2003-05-01

    Quadrature operators are useful for obtaining the modulating phase phi in interferometry and temporal signals in electrical communications. In carrier-frequency interferometry and electrical communications, one uses the Hilbert transform to obtain the quadrature of the signal. In these cases the Hilbert transform gives the desired quadrature because the modulating phase is monotonically increasing. We propose an n-dimensional quadrature operator that transforms cos(phi) into -sin(phi) regardless of the frequency spectrum of the signal. With the quadrature of the phase-modulated signal, one can easily calculate the value of phi over the whole domain of interest. Our quadrature operator is composed of two n-dimensional vector fields: One is related to the gradient of the image normalized with respect to local frequency magnitude, and the other is related to the sign of the local frequency of the signal. The inner product of these two vector fields gives us the desired quadrature signal. This quadrature operator is derived in the image space by use of differential vector calculus and in the frequency domain by use of an n-dimensional generalization of the Hilbert transform. A robust numerical algorithm is given to find the modulating phase of two-dimensional single-image closed-fringe interferograms by use of the ideas put forward.
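
    As a one-dimensional illustration of the quadrature idea described above (not code from the paper), the sketch below uses the Hilbert transform, via the analytic signal, to obtain the quadrature of a signal cos(phi) with monotonically increasing phase and then recovers phi; NumPy and SciPy are assumed, and the chirp-like phase law is purely illustrative.

      import numpy as np
      from scipy.signal import hilbert

      # Illustrative signal: cos(phi(t)) with a monotonically increasing phase phi(t).
      t = np.linspace(0.0, 1.0, 4000)
      phi = 2 * np.pi * (20 * t + 15 * t**2)   # chirp-like, strictly increasing
      signal = np.cos(phi)

      # For a monotonic phase, the analytic signal cos(phi) + i*H[cos(phi)] approximates
      # exp(i*phi), so the quadrature (in the paper's sign convention) is -sin(phi).
      analytic = hilbert(signal)
      quadrature = -np.imag(analytic)

      # Recover the modulating phase from the in-phase and quadrature components.
      phi_est = np.unwrap(np.arctan2(-quadrature, signal))
      print(np.max(np.abs(phi_est - phi)[200:-200]))   # small away from the edges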

  8. Spectral Quadrature method for accurate O ( N ) electronic structure calculations of metals and insulators

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-02

    We present the Clenshaw–Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N³) planewave results. In conclusion, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.
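
    For reference, the following is a minimal, self-contained sketch of a Clenshaw-Curtis quadrature rule on [-1, 1] of the kind named above (independent of the SQ/SQDFT codes); only NumPy is assumed and the test integrand is illustrative. The nodes are Chebyshev-Lobatto points and the weights follow from integrating the Chebyshev interpolant exactly.

      import numpy as np

      def clenshaw_curtis(n):
          # Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1]; n even.
          k = np.arange(n + 1)
          x = np.cos(np.pi * k / n)                        # Chebyshev-Lobatto points
          j = np.arange(0, n + 1, 2)
          d = 2.0 / (1.0 - j.astype(float) ** 2)           # exact integrals of even-degree T_j
          cj = np.where((j == 0) | (j == n), 0.5, 1.0)     # halve first/last Chebyshev coefficients
          ck = np.where((k == 0) | (k == n), 0.5, 1.0)     # halve endpoint samples
          w = ck * (2.0 / n) * (np.cos(np.pi * np.outer(j, k) / n).T @ (cj * d))
          return x, w

      x, w = clenshaw_curtis(16)
      print(w @ np.exp(x), np.exp(1) - np.exp(-1))         # rule vs exact integral of exp(x)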

  9. From Lobatto Quadrature to the Euler Constant "e"

    ERIC Educational Resources Information Center

    Khattri, Sanjay Kumar

    2010-01-01

    Based on the Lobatto quadrature, we develop several new closed form approximations to the mathematical constant "e." For validating effectiveness of our approximations, a comparison of our results to the existing approximations is also presented. Another objective of our work is to inspire students to formulate other better approximations by using…

  10. Reissner-Mindlin Legendre Spectral Finite Elements with Mixed Reduced Quadrature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brito, K. D.; Sprague, M. A.

    2012-10-01

    Legendre spectral finite elements (LSFEs) are examined through numerical experiments for static and dynamic Reissner-Mindlin plate bending and a mixed-quadrature scheme is proposed. LSFEs are high-order Lagrangian-interpolant finite elements with nodes located at the Gauss-Lobatto-Legendre quadrature points. Solutions on unstructured meshes are examined in terms of accuracy as a function of the number of model nodes and total operations. While nodal-quadrature LSFEs have been shown elsewhere to be free of shear locking on structured grids, locking is demonstrated here on unstructured grids. LSFEs with mixed quadrature are, however, locking-free and are significantly more accurate than low-order finite elements for a given model size or total computation time.

  11. The Fall 2000 and Fall 2001 SOHO-Ulysses Quadratures

    NASA Technical Reports Server (NTRS)

    Suess, S. T.; Poletto, G.; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    SOHO-Ulysses quadrature occurs when the SOHO-Sun-Ulysses included angle is 90 degrees. It is only at such times that the same plasma leaving the Sun in the direction of Ulysses can first be remotely analyzed with SOHO instruments and then later be sampled in situ by Ulysses instruments. The quadratures in December 2000 and 2001 are of special significance because Ulysses will be near the south and north heliographic poles, respectively, and the solar cycle will be near sunspot maximum. Quadrature geometry is sometimes confusing and observations are influenced by solar rotation. The Fall 2000 and 2001 quadratures are more complex than usual because Ulysses is not in a true polar orbit and the orbital speed of Ulysses about the Sun is becoming comparable to the speed of SOHO about the Sun. In 2000 Ulysses will always be slightly behind the pole but will appear to hang over the pole for over two months because it is moving around the Sun in the same direction as SOHO. In 2001 Ulysses will be slightly in front of the pole so that its footpoint will be directly observable. Detailed plots will be shown of the relative positions of SOHO and Ulysses. In neither case is true quadrature actually achieved, but this works to the observer's advantage in 2001.

  12. The Fall 2000 and Fall 2001 SOHO-Ulysses Quadratures

    NASA Technical Reports Server (NTRS)

    Suess, S. T.; Poletto, G.

    2000-01-01

    SOHO-Ulysses quadrature occurs when the SOHO-Sun-Ulysses included angle is 90 degrees. It is only at such times that the same plasma leaving the Sun in the direction of Ulysses can first be remotely analyzed with SOHO instruments and then later be sampled in situ by Ulysses instruments. The quadratures in December 2000 and 2001 are of special significance because Ulysses will be near the south and north heliographic poles, respectively, and the solar cycle will be near sunspot maximum. Quadrature geometry is sometimes confusing and observations are influenced by solar rotation. The Fall 2000 and 2001 quadratures are more complex than usual because Ulysses is not in a true polar orbit and the orbital speed of Ulysses about the Sun is becoming comparable to the speed of SOHO about the Sun. In 2000 Ulysses will always be slightly behind the pole but will appear to hang over the pole for over two months because it is moving around the Sun in the same direction as SOHO. In 2001, Ulysses will be slightly in front of the pole so that its footpoint will be directly observable. Detailed plots will be shown of the relative positions of SOHO and Ulysses. In neither case is true quadrature actually achieved, but this works to the observer's advantage in 2001.

  13. On a quadrature formula of Gori and Micchelli

    NASA Astrophysics Data System (ADS)

    Yang, Shijun

    2005-04-01

    Sparked by Bojanov (J. Comput. Appl. Math. 70 (1996) 349), we provide an alternate approach to quadrature formulas based on the zeros of the Chebyshev polynomial of the first kind for any weight function w introduced and studied in Gori and Micchelli (Math. Comp. 65 (1996) 1567), thereby improving on their observations. Upon expansion of the divided differences, we obtain explicit expressions for the corresponding Cotes coefficients in Gauss-Turan quadrature formulas for I(f;w) and I(fTn;w) for a Gori-Micchelli weight function. It is also interesting to mention a fact neglected by the literature for about 30 years: as a consequence of the expansion of the divided differences in a special case, the solution of Turan's famous Problem 26, raised in 1980, was in fact already implied by a 1972 result of Micchelli and Rivlin (IBM J. Res. Develop. 16 (1972) 372). Some concluding comments are made in the final section.

  14. Dynamical error bounds for continuum discretisation via Gauss quadrature rules—A Lieb-Robinson bound approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, M. P.; Centre for Quantum Technologies, National University of Singapore; QuTech, Delft University of Technology, Lorentzweg 1, 2611 CJ Delft

    2016-02-15

    Instances of discrete quantum systems coupled to a continuum of oscillators are ubiquitous in physics. Often the continua are approximated by a discrete set of modes. We derive error bounds on expectation values of system observables that have been time evolved under such discretised Hamiltonians. These bounds take on the form of a function of time and the number of discrete modes, where the discrete modes are chosen according to Gauss quadrature rules. The derivation makes use of tools from the field of Lieb-Robinson bounds and the theory of orthonormal polynomials.
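
    To make the setting concrete, here is a minimal sketch (not taken from the paper) of discretising a bath with spectral density J(w) on [0, w_max] into N modes whose frequencies are Gauss-Legendre nodes and whose couplings are fixed by the quadrature weights; the Ohmic form of J, the cutoff, and all names are illustrative assumptions, and only NumPy is required.

      import numpy as np

      def discretize_bath(J, omega_max, n_modes):
          # Frequencies are Gauss-Legendre nodes mapped to [0, omega_max];
          # squared couplings are the mapped weights times J at the nodes.
          x, w = np.polynomial.legendre.leggauss(n_modes)
          omega = 0.5 * omega_max * (x + 1.0)
          g = np.sqrt(0.5 * omega_max * w * J(omega))
          return omega, g

      J = lambda w: 0.1 * w                      # illustrative Ohmic spectral density, hard cutoff
      omega, g = discretize_bath(J, omega_max=5.0, n_modes=64)

      # Sanity check: the summed squared couplings reproduce the integral of J over the band.
      print(np.sum(g**2), 0.1 * 5.0**2 / 2)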

  15. Quadratures with multiple nodes, power orthogonality, and moment-preserving spline approximation

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.

    2001-01-01

    Quadrature formulas with multiple nodes, power orthogonality, and some applications of such quadratures to moment-preserving approximation by defective splines are considered. An account of power orthogonality (s- and σ-orthogonal polynomials) and generalized Gaussian quadratures with multiple nodes, including stable algorithms for numerical construction of the corresponding polynomials and Cotes numbers, is given. In particular, the important case of the Chebyshev weight is analyzed. Finally, some applications in moment-preserving approximation of functions by defective splines are discussed.

  16. Noncritical quadrature squeezing through spontaneous polarization symmetry breaking.

    PubMed

    Garcia-Ferrer, Ferran V; Navarrete-Benlloch, Carlos; de Valcárcel, Germán J; Roldán, Eugenio

    2010-07-01

    We discuss the possibility of generating noncritical quadrature squeezing by spontaneous polarization symmetry breaking. We first consider Type II frequency-degenerate optical parametric oscillators but discard them for a number of reasons. Then we propose a four-wave-mixing cavity, in which the polarization of the output mode is always linear but has an arbitrary orientation. We show that in such a cavity, complete noise suppression in a quadrature of the output field occurs, irrespective of the parameter values.

  17. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn-Sham calculations at high temperature

    NASA Astrophysics Data System (ADS)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.

    2018-03-01

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.

  18. Improvements in sub-grid, microphysics averages using quadrature based approaches

    NASA Astrophysics Data System (ADS)

    Chowdhary, K.; Debusschere, B.; Larson, V. E.

    2013-12-01

    Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
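
    As a rough illustration of the deterministic-sampling idea (not the authors' code), the sketch below computes the grid-box average E[f(q)] of a nonlinear rate for a lognormally distributed quantity q by an 8-point Gauss-Hermite rule and compares it with plain Monte Carlo; the power-law rate is an illustrative stand-in for a Kessler-type autoconversion formula, and only NumPy is assumed.

      import numpy as np

      rate = lambda q: 1350.0 * q**2.47          # illustrative power-law microphysics rate
      mu, sigma = np.log(1e-3), 0.5              # parameters of ln(q) ~ N(mu, sigma^2)

      # Deterministic sampling: 8-point Gauss-Hermite rule (weight exp(-x^2)).
      x, w = np.polynomial.hermite.hermgauss(8)
      quad_avg = np.sum(w * rate(np.exp(mu + np.sqrt(2.0) * sigma * x))) / np.sqrt(np.pi)

      # Random sampling with far more points, for comparison.
      rng = np.random.default_rng(0)
      mc_avg = rate(np.exp(rng.normal(mu, sigma, size=200_000))).mean()

      print(quad_avg, mc_avg)                    # 8 deterministic points vs 2e5 random samples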

  19. Multidimensional Hermite-Gaussian quadrature formulae and their application to nonlinear estimation

    NASA Technical Reports Server (NTRS)

    Mcreynolds, S. R.

    1975-01-01

    A simplified technique is proposed for calculating multidimensional Hermite-Gaussian quadratures that involves taking the square root of a matrix by the Cholesky algorithm rather than computation of the eigenvectors of the matrix. Ways of reducing the dimension, number, and order of the quadratures are set forth. If the function f(x) under the integral sign is not well approximated by a low-order algebraic expression, the order of the quadrature may be reduced by factoring f(x) into an expression that is nearly algebraic and one that is Gaussian.
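
    A minimal sketch of the Cholesky-based construction described above (a generic illustration, not the author's implementation): a tensor-product Gauss-Hermite rule for E[f(X)] with X ~ N(mean, cov), in which the covariance is factored as cov = L L^T by Cholesky and the one-dimensional nodes are mapped through mean + sqrt(2) L x; NumPy is assumed and the test function is illustrative.

      import numpy as np
      from itertools import product

      def gauss_hermite_expectation(f, mean, cov, order=5):
          # Tensor-product Gauss-Hermite approximation of E[f(X)], X ~ N(mean, cov),
          # using a Cholesky factor instead of an eigendecomposition of cov.
          x1, w1 = np.polynomial.hermite.hermgauss(order)
          L = np.linalg.cholesky(cov)
          dim = len(mean)
          total = 0.0
          for idx in product(range(order), repeat=dim):
              x = x1[list(idx)]
              w = np.prod(w1[list(idx)])
              total += w * f(mean + np.sqrt(2.0) * (L @ x))
          return total / np.pi ** (dim / 2)

      mean = np.array([0.3, -0.1])
      cov = np.array([[1.0, 0.4], [0.4, 0.5]])
      est = gauss_hermite_expectation(lambda y: y[0]**2 + y[0] * y[1], mean, cov)
      exact = cov[0, 0] + mean[0]**2 + cov[0, 1] + mean[0] * mean[1]
      print(est, exact)                          # agree, since the integrand is a polynomial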

  20. The use of rational functions in numerical quadrature

    NASA Astrophysics Data System (ADS)

    Gautschi, Walter

    2001-08-01

    Quadrature problems involving functions that have poles outside the interval of integration can profitably be solved by methods that are exact not only for polynomials of appropriate degree, but also for rational functions having the same (or the most important) poles as the function to be integrated. Constructive and computational tools for accomplishing this are described and illustrated in a number of quadrature contexts. The superiority of such rational/polynomial methods is shown by an analysis of the remainder term and documented by numerical examples.

  1. Offset quadrature communications with decision-feedback carrier synchronization

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Smith, J. G.

    1974-01-01

    In order to accommodate a quadrature amplitude-shift-keyed (QASK) signal, Simon and Smith (1974) have modified the decision-feedback loop which tracks a quadrature phase-shift-keyed (QPSK) signal. In the investigation reported, approaches are considered to modify the loops in such a way that offset QASK signals can be tracked, giving attention to the special case of offset QPSK. The development of the stochastic integro-differential equation of operation for a decision-feedback offset QASK loop is discussed along with the probability density function of the phase error process.

  2. RuleMonkey: software for stochastic simulation of rule-based models

    PubMed Central

    2010-01-01

    Background The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions RuleMonkey enables the simulation of

  3. Thin-thick quadrature frequency conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eimerl, D.

    1985-02-07

    The quadrature conversion scheme is a method of generating the second harmonic. The scheme, which uses two crystals in series, has several advantages over single-crystal or other two crystal schemes. The most important is that it is capable of high conversion efficiency over a large dynamic range of drive intensity and detuning angle.

  4. SQDFT: Spectral Quadrature method for large-scale parallel O ( N ) Kohn–Sham calculations at high temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn–Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw–Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw–Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. Here, we further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.

  5. SQDFT: Spectral Quadrature method for large-scale parallel O ( N ) Kohn–Sham calculations at high temperature

    DOE PAGES

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; ...

    2017-12-07

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn–Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw–Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw–Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. Here, we further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.

  6. Saturation dependence of the quadrature conductivity of oil-bearing sands

    NASA Astrophysics Data System (ADS)

    Schmutz, M.; Blondel, A.; Revil, A.

    2012-02-01

    We have investigated the complex conductivity of oil-bearing sands with six distinct oil types including sunflower oil, silicone oil, gum rosin, paraffin, engine oil, and an industrial oil of complex composition. In all these experiments, the oil was the non-wetting phase. The in-phase (real) conductivity follows a power-law relationship with the saturation (also known as the second Archie's law) but with a saturation exponent n ranging from 1.1 to 3.1. In most experiments, the quadrature conductivity also follows a power-law relationship with the water saturation, but with a power-law exponent p that can be either positive or negative. For some samples, the quadrature conductivity first increases with saturation and then decreases, indicating that two processes compete in controlling the quadrature conductivity. One is related to the insulating nature of the oil phase and a second could be associated with the surface area of the oil/water interface. The quadrature conductivity seems to be influenced not only by the value of the saturation exponent n (according to the Vinegar and Waxman model, p = n - 1), but also by the surface area between the oil phase and the water phase, especially for very water-repellent oil having a fractal oil-water interface.

  7. Design and application of quadrature compensation patterns in bulk silicon micro-gyroscopes.

    PubMed

    Ni, Yunfang; Li, Hongsheng; Huang, Libin

    2014-10-29

    This paper focuses on the detailed design issues of a peculiar quadrature reduction method named system stiffness matrix diagonalization, whose key technology is the design and application of quadrature compensation patterns. For bulk silicon micro-gyroscopes, a complete design and application case was presented. The compensation principle was described first. In the mechanical design, four types of basic structure units were presented to obtain the basic compensation function. A novel layout design was proposed to eliminate the additional disturbing static forces and torques. Parameter optimization was carried out to maximize the available compensation capability in a limited layout area. Two types of voltage loading methods were presented. Their influences on the sense mode dynamics were analyzed. The proposed design was applied to a dual-mass silicon micro-gyroscope developed in our laboratory. The design provides a theoretical compensation capability for quadrature equivalent angular rates of up to 412 °/s. In experiments, an actual quadrature equivalent angular rate of 357 °/s was compensated successfully. The actual compensation voltages were a little larger than the theoretical ones. The correctness of the design and the theoretical analyses was verified. They can be commonly used in planar linear vibratory silicon micro-gyroscopes for quadrature compensation purposes.

  8. On the Study of a Quadrature DCSK Modulation Scheme for Cognitive Radio

    NASA Astrophysics Data System (ADS)

    Quyen, Nguyen Xuan

    The past decade has witnessed a boom in wireless communications, which necessitates continual improvement in data rate, error-rate performance, bandwidth efficiency, and information security. In this work, we propose an in-phase/quadrature (IQ) differential chaos-shift keying (DCSK) modulation scheme for application in cognitive radio (CR), named CR-IQ-DCSK, which offers the above improvements. A chaotic signal is generated in the frequency domain and then converted into the time domain via an inverse Fourier transform. The real and imaginary components of the frequency-based chaotic signal are simultaneously used in the in-phase and quadrature branches of an IQ modulator, where each branch conveys two bits by means of a DCSK-based modulation. The schemes and operating principles of the modulator and demodulator are proposed and described. The analytical BER performance of the proposed scheme over a typical multipath Rayleigh fading channel is derived and verified by numerical simulations. Results show that the proposed scheme outperforms DCSK and CDSK, and performs better as the number of channel paths increases.

  9. Rule-Based Event Processing and Reaction Rules

    NASA Astrophysics Data System (ADS)

    Paschke, Adrian; Kozlenkov, Alexander

    Reaction rules and event processing technologies play a key role in making business and IT / Internet infrastructures more agile and active. While event processing is concerned with detecting events from large event clouds or streams in almost real-time, reaction rules are concerned with the invocation of actions in response to events and actionable situations. They state the conditions under which actions must be taken. In the last decades various reaction rule and event processing approaches have been developed, which for the most part have been advanced separately. In this paper we survey reaction rule approaches and rule-based event processing systems and languages.

  10. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE PAGES

    Fierce, Laura; McGraw, Robert L.

    2017-07-26

    Here, sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
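
    As a simplified illustration of the moment-constrained construction (a generic stand-in, not the paper's algorithm, with the entropy-inspired cost replaced by a plain linear cost), the sketch below uses linear programming to place non-negative quadrature weights on a fixed grid of particle diameters so that the first six moments of a sampled size distribution are matched; NumPy and SciPy are assumed, and all distribution parameters and names are illustrative.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(1)
      diam = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=50_000)   # illustrative sizes (microns)
      grid = np.geomspace(0.01, 1.0, 48)                # candidate quadrature abscissas
      diam = np.clip(diam, grid[0], grid[-1])           # keep the sample inside the grid range

      orders = np.arange(6)                             # match moments 0..5
      m = np.array([np.mean(diam**k) for k in orders])  # target moments
      A_eq = np.vander(grid, N=len(orders), increasing=True).T   # A_eq[k, j] = grid[j]**k
      cost = np.log(grid / np.median(grid)) ** 2        # mild preference for central nodes

      res = linprog(cost, A_eq=A_eq, b_eq=m, bounds=(0, None), method="highs")
      w = res.x
      print("active nodes:", np.count_nonzero(w > 1e-12))
      print("moment errors:", A_eq @ w - m)
      # Any nonlinear functional f (e.g. a CCN activation fraction) can then be
      # estimated as sum_j w[j] * f(grid[j]).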

  11. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fierce, Laura; McGraw, Robert L.

    Here, sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.

  12. Multisite EPR oximetry from multiple quadrature harmonics.

    PubMed

    Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C

    2012-01-01

    Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Quadrature transmit array design using single-feed circularly polarized patch antenna for parallel transmission in MR imaging.

    PubMed

    Pang, Yong; Yu, Baiying; Vigneron, Daniel B; Zhang, Xiaoliang

    2014-02-01

    Quadrature coils are often desired in MR applications because they can improve MR sensitivity and also reduce excitation power. In this work, we propose, for the first time, a quadrature array design strategy for parallel transmission at 298 MHz using the single-feed circularly polarized (CP) patch antenna technique. Each array element is a nearly square ring microstrip antenna and is fed at a point on the diagonal of the antenna to generate quadrature magnetic fields. Compared with conventional quadrature coils, the single-feed structure is much simpler and more compact, making the quadrature coil array design practical. Numerical simulations demonstrate that the decoupling between elements is better than -35 dB for all the elements and the RF fields are homogeneous with deep penetration and quadrature behavior in the area of interest. Bloch equation simulation is also performed to simulate the excitation procedure by using an 8-element quadrature planar patch array to demonstrate its feasibility in parallel transmission at the ultrahigh field of 7 Tesla.

  14. Rules based process window OPC

    NASA Astrophysics Data System (ADS)

    O'Brien, Sean; Soper, Robert; Best, Shane; Mason, Mark

    2008-03-01

    As a preliminary step towards Model-Based Process Window OPC we have analyzed the impact of correcting post-OPC layouts using rules-based methods. Image processing on the Brion Tachyon was used to identify sites where the OPC model/recipe failed to generate an acceptable solution. A set of rules for 65nm active and poly was generated by classifying these failure sites. The rules were based upon segment runlengths, figure spaces, and adjacent figure widths. 2.1 million sites for active were corrected in a small chip (comparing the pre and post rules-based operations), and 59 million were found at poly. Tachyon analysis of the final reticle layout found weak margin sites distinct from those sites repaired by rules-based corrections. For the active layer more than 75% of the sites corrected by rules would have printed without a defect, indicating that most rules-based cleanups degrade the lithographic pattern. Some sites were missed by the rules-based cleanups due to either bugs in the DRC software or gaps in the rules table. In the end dramatic changes to the reticle prevented catastrophic lithography errors, but this method is far too blunt. A more subtle model-based procedure is needed, changing only those sites which have unsatisfactory lithographic margin.

  15. Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules

    NASA Astrophysics Data System (ADS)

    Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.

    Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.

  16. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.

  17. 77 FR 52977 - Regulatory Capital Rules: Advanced Approaches Risk-Based Capital Rule; Market Risk Capital Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ... Corporation 12 CFR Parts 324, 325 Regulatory Capital Rules: Advanced Approaches Risk-Based Capital Rule... 325 RIN 3064-AD97 Regulatory Capital Rules: Advanced Approaches Risk-Based Capital Rule; Market Risk... the agencies' current capital rules. In this NPR (Advanced Approaches and Market Risk NPR) the...

  18. Disentangling Complexity in Bayesian Automatic Adaptive Quadrature

    NASA Astrophysics Data System (ADS)

    Adam, Gheorghe; Adam, Sanda

    2018-02-01

    The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
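
    The "optimistic" path mentioned above, subrange subdivision by bisection, can be illustrated by the following generic adaptive-quadrature sketch (a textbook-style adaptive Simpson rule, not the BAAQ algorithm itself); the tolerance handling and names are illustrative.

      import math

      def adaptive_bisection(f, a, b, tol=1e-10, depth=50):
          def simpson(a, b):
              c = 0.5 * (a + b)
              return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

          def recurse(a, b, whole, tol, depth):
              c = 0.5 * (a + b)
              left, right = simpson(a, c), simpson(c, b)
              err = left + right - whole
              if abs(err) < 15.0 * tol or depth == 0:      # standard Simpson error heuristic
                  return left + right + err / 15.0
              # Otherwise bisect the subrange and split the tolerance between the halves.
              return (recurse(a, c, left, 0.5 * tol, depth - 1)
                      + recurse(c, b, right, 0.5 * tol, depth - 1))

          return recurse(a, b, simpson(a, b), tol, depth)

      print(adaptive_bisection(math.sin, 0.0, math.pi), 2.0)   # integral of sin on [0, pi]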

  19. Low-Latitude Solar Wind During the Fall 1998 SOHO-Ulysses Quadrature

    NASA Technical Reports Server (NTRS)

    Poletto, G.; Suess, S. T.; Biesecker, D. A.; Esser, R.; Gloeckler, G.; Ko, Y.-K.; Zurbuchen, T. H.

    2002-01-01

    Solar and Heliospheric Observatory (SOHO)-Ulysses quadratures occur when the SOHO-Sun-Ulysses included angle is 90 deg. These offer the opportunity to directly compare properties of plasma parcels, observed by SOHO [Domingo et al.] in the low corona, with properties of the same parcels measured, in due time, in situ, by Ulysses [Wenzel et al.]. We refer the reader to Suess et al. for an extended discussion of SOHO-Ulysses quadrature geometry. Here it suffices to recall that there are two quadratures per year, as SOHO makes its one-year revolution around the Sun. This is because SOHO is at the L1 Lagrangian point, in essentially the same place as the Earth, while Ulysses is in a near-polar ~5-year solar orbit with a perihelion of 1.34 AU and aphelion of 5.4 AU.

  20. Evaluation of quadrature-phase-shift-keying signal characteristics in W-band radio-over-fiber transmission using direct in-phase/quadrature-phase conversion technique

    NASA Astrophysics Data System (ADS)

    Suzuki, Meisaku; Kanno, Atsushi; Yamamoto, Naokatsu; Sotobayashi, Hideyuki

    2016-02-01

    The effects of in-phase/quadrature-phase (IQ) imbalances are evaluated with a direct IQ down-converter in the W-band (75-110 GHz). The IQ imbalance of the converter is measured to be within +/-10 degrees over an intermediate frequency range of DC to 26.5 GHz. 1-8 Gbaud quadrature phase-shift keying (QPSK) signals are transmitted successfully, with observed bit error rates within a forward error correction limit of 2×10⁻³, using radio-over-fiber (RoF) techniques. The direct down-conversion technique is applicable to next-generation high-speed wireless access communication systems in the millimeter-wave band.

  1. Exact Integrations of Polynomials and Symmetric Quadrature Formulas over Arbitrary Polyhedral Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1997-01-01

    This paper is concerned with two important elements in the high-order accurate spatial discretization of finite volume equations over arbitrary grids. One element is the integration of basis functions over arbitrary domains, which is used in expressing various spatial integrals in terms of discrete unknowns. The other consists of quadrature approximations to those integrals. Only polynomial basis functions applied to polyhedral and polygonal grids are treated here. Non-triangular polygonal faces are subdivided into a union of planar triangular facets, and the resulting triangulated polyhedron is subdivided into a union of tetrahedra. The straight line segment, triangle, and tetrahedron are thus the fundamental shapes that are the building blocks for all integrations and quadrature approximations. Integrals of products up to the fifth order are derived in a unified manner for the three fundamental shapes in terms of the position vectors of vertices. Results are given both in terms of tensor products and products of Cartesian coordinates. The exact polynomial integrals are used to obtain symmetric quadrature approximations of any degree of precision up to five for arbitrary integrals over the three fundamental domains. Using a coordinate-free formulation, simple and rational procedures are developed to derive virtually all quadrature formulas, including some previously unpublished. Four symmetry groups of quadrature points are introduced to derive Gauss formulas, while their limiting forms are used to derive Lobatto formulas. Representative Gauss and Lobatto formulas are tabulated. The relative efficiency of their application to polyhedral and polygonal grids is detailed. The extension to higher degrees of precision is discussed.
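
    As a small concrete instance of the symmetric rules discussed above (a classical textbook rule written directly in terms of vertex positions, not necessarily one of the paper's tabulated formulas), the sketch below builds the three-point edge-midpoint rule on a triangle, which is exact for polynomials of total degree two, and checks it on the unit right triangle; only NumPy is assumed.

      import numpy as np

      def triangle_midpoint_rule(v0, v1, v2):
          # Symmetric 3-point rule: edge midpoints with equal weights area/3,
          # exact for bivariate polynomials of total degree <= 2.
          area = 0.5 * abs((v1[0] - v0[0]) * (v2[1] - v0[1])
                           - (v1[1] - v0[1]) * (v2[0] - v0[0]))
          pts = [(v0 + v1) / 2.0, (v1 + v2) / 2.0, (v2 + v0) / 2.0]
          return pts, [area / 3.0] * 3

      # Check on the unit right triangle: the integral of x^2 + x*y over
      # {(x, y): x, y >= 0, x + y <= 1} is 1/12 + 1/24 = 1/8.
      v0, v1, v2 = map(np.array, ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0]))
      pts, wts = triangle_midpoint_rule(v0, v1, v2)
      print(sum(w * (p[0]**2 + p[0] * p[1]) for p, w in zip(pts, wts)), 1.0 / 8.0)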

  2. Solar Wind Characteristics from SOHO-Sun-Ulysses Quadrature Observations

    NASA Technical Reports Server (NTRS)

    Poletto, Giannina; Suess, Steve T.; Six, N. Frank (Technical Monitor)

    2002-01-01

    Over the past few years, we have been running SOHO (Solar and Heliospheric Observatory)-Sun-Ulysses quadrature campaigns, aimed at comparing the plasma properties at coronal altitudes with plasma properties at interplanetary distances. Coronal plasma has been observed by SOHO experiments: mainly, we used LASCO (Large Angle and Spectrometric Coronagraph Experiment) data to understand the overall coronal configuration at the time of quadratures and analyzed SUMER (Solar Ultraviolet Measurements of Emitted Radiation), CDS (Coronal Diagnostic Spectrometer) and UVCS (Ultraviolet Coronagraph Spectrometer) data to derive its physical characteristics. At interplanetary distances, SWICS (Solar Wind Ion Composition Spectrometer) and SWOOPS (Solar Wind Observation over the Poles of the Sun) aboard Ulysses provided us with interplanetary plasma data. Here we report on results from some of the campaigns. We notice that, depending on the geometry of the quadrature, i.e. on whether the radial to Ulysses traverses the corona at high or low latitudes, we are able to study different kinds of solar wind. In particular, a comparison between low-latitude and high-latitude wind allowed us to provide evidence for differences in the acceleration of polar, fast plasma and equatorial, slow plasma: the latter occurring at higher levels and through a more extended region than fast wind. These properties are shared by both the proton and heavy-ion outflows. Quadrature observations may also provide useful information on coronal vs. in situ elemental composition. To this end, we analyzed spectra taken in the corona, at altitudes ranging between approx. 1.02 and 2.2 solar radii, and derived the abundances of a number of ions, including oxygen and iron. Values of the O/Fe ratio, at coronal levels, have been compared with measurements of this ratio made by SWICS at interplanetary distances. Our results are compared with previous findings and predictions from modeling efforts.

  3. Automated visualization of rule-based models

    PubMed Central

    Tapia, Jose-Juan; Faeder, James R.

    2017-01-01

    Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816

  4. Planar quadrature RF transceiver design using common-mode differential-mode (CMDM) transmission line method for 7T MR imaging.

    PubMed

    Li, Ye; Yu, Baiying; Pang, Yong; Vigneron, Daniel B; Zhang, Xiaoliang

    2013-01-01

    The use of quadrature RF magnetic fields has been demonstrated to be an efficient method to reduce transmit power and to increase the signal-to-noise ratio (SNR) in magnetic resonance (MR) imaging. The goal of this project was to develop a new method using the common-mode and differential-mode (CMDM) technique for compact, planar, distributed-element quadrature transmit/receive resonators for MR signal excitation and detection and to investigate its performance for MR imaging, particularly at ultrahigh magnetic fields. A prototype resonator based on the CMDM method implemented by using microstrip transmission line was designed and fabricated for 7T imaging. Both the common mode (CM) and the differential mode (DM) of the resonator were tuned and matched at 298 MHz independently. Numerical electromagnetic simulation was performed to verify the orthogonal B1 field direction of the two modes of the CMDM resonator. Both workbench tests and MR imaging experiments were carried out to evaluate the performance. The intrinsic decoupling between the two modes of the CMDM resonator was demonstrated by the bench test, showing a better than -36 dB transmission coefficient between the two modes at resonance frequency. The MR images acquired by using each mode and the images combined in quadrature showed that the CM and DM of the proposed resonator provided similar B1 coverage and achieved SNR improvement in the entire region of interest. The simulation and experimental results demonstrate that the proposed CMDM method with distributed-element transmission line technique is a feasible and efficient technique for planar quadrature RF coil design at ultrahigh fields, providing intrinsic decoupling between two quadrature channels and high frequency capability. Due to its simple and compact geometry and easy implementation of decoupling methods, the CMDM quadrature resonator can possibly be a good candidate for design blocks in multichannel RF coil arrays.

  5. All-optical simultaneous multichannel quadrature phase shift keying signal regeneration based on phase-sensitive amplification

    NASA Astrophysics Data System (ADS)

    Wang, Hongxiang; Wang, Qi; Bai, Lin; Ji, Yuefeng

    2018-01-01

    A scheme is proposed to realize the all-optical phase regeneration of four-channel quadrature phase-shift keying (QPSK) signals based on phase-sensitive amplification. By utilizing a conjugate pump and a common pump in a highly nonlinear optical fiber, a degenerate four-wave mixing process is observed, and the QPSK signals are regenerated. The number of waves is reduced to decrease the cross talk caused by undesired nonlinear interaction during the coherent superposition process. In addition, to avoid the effect of overlapping frequencies, the frequency spans between pumps and signals are set to be nonintegral multiples. Optical signal-to-noise ratio improvement is validated by bit error rate measurements. Compared with single-channel regeneration, multichannel regeneration brings a 0.4-dB OSNR penalty when the BER is 10⁻³, which shows that the cross talk in the regeneration process is negligible.

  6. Stochastic sampling of quadrature grids for the evaluation of vibrational expectation values

    NASA Astrophysics Data System (ADS)

    López Ríos, Pablo; Monserrat, Bartomeu; Needs, Richard J.

    2018-02-01

    The thermal lines method for the evaluation of vibrational expectation values of electronic observables [B. Monserrat, Phys. Rev. B 93, 014302 (2016), 10.1103/PhysRevB.93.014302] was recently proposed as a physically motivated approximation offering a balance between the accuracy of direct Monte Carlo integration and the low computational cost of using local quadratic approximations. In this paper we reformulate thermal lines as a stochastic implementation of quadrature-grid integration, analyze the analytical form of its bias, and extend the method to multiple-point quadrature grids applicable to any factorizable harmonic or anharmonic nuclear wave function. The bias incurred by thermal lines is found to depend on the local form of the expectation value, and we demonstrate that the use of finer quadrature grids along selected modes can eliminate this bias, while still offering a ~30% lower computational cost than direct Monte Carlo integration in our tests.

  7. Discrete variable representation in electronic structure theory: quadrature grids for least-squares tensor hypercontraction.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2013-05-21

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.

  8. On the power spectral density of quadrature modulated signals. [satellite communication

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.

    1981-01-01

    The conventional (no-offset) quadriphase modulation technique suffers from the fact that hardlimiting will restore the frequency sidelobes removed by proper filtering. Thus, offset keyed quadriphase modulation techniques are often proposed for satellite communication with bandpass hardlimiting. A unified theory is developed which is capable of describing the power spectral density before and after the hardlimiting process. Using in-phase and quadrature-phase channels with arbitrary pulse shaping, analytical results are established for generalized quadriphase modulation. In particular, MSK, OPSK, or the recently introduced overlapped raised cosine keying all fall into this general category. It is shown that for a linear communication channel, the power spectral density of the modulated signal remains unchanged regardless of the offset delay. Furthermore, if the in-phase and quadrature-phase channels have identical pulse shapes without offset, the spectrum after bandpass hardlimiting will be identical to that of the conventional QPSK modulation. Numerical examples are given for various modulation techniques. A case of different pulse shapes in the in-phase and quadrature-phase channels is also considered.

  9. Prostate multimodality image registration based on B-splines and quadrature local energy.

    PubMed

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. The TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS to visualize early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early stage malignancy, and therefore, a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure computed from the texture images obtained from the amplitude responses of the directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity coefficient (DSC) and 95% Hausdorff Distance (HD) values for prostate mid-gland slices of 20 patients, and the Target Registration Error (TRE) for the 18 patients in which homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 and 4.43 ± 2.77 mm, respectively. Our method shows statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows 1.18 times improvement over thin-plate splines registration with average TRE of 3.11 ± 2.18 mm. The mean DSC and the mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm, respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. The low TRE values of the proposed registration method support its feasibility for use during TRUS-guided biopsy.

  10. Automated revision of CLIPS rule-bases

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick M.; Pazzani, Michael J.

    1994-01-01

    This paper describes CLIPS-R, a theory revision system for the revision of CLIPS rule-bases. CLIPS-R may be used for a variety of knowledge-base revision tasks, such as refining a prototype system, adapting an existing system to slightly different operating conditions, or improving an operational system that makes occasional errors. We present a description of how CLIPS-R revises rule-bases, and an evaluation of the system on three rule-bases.

  11. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even this primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  12. Using Rule-Based Computer Programming to Unify Communication Rules Research.

    ERIC Educational Resources Information Center

    Sanford, David L.; Roach, J. W.

    This paper proposes the use of a rule-based computer programming language as a standard for the expression of rules, arguing that the adoption of a standard would enable researchers to communicate about rules in a consistent and significant way. Focusing on the formal equivalence of artificial intelligence (AI) programming to different types of…

  13. Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    2015-03-01

    We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables to convey information and Gaussian sub-channels for the transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by our sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels; this information is then used in the adaptive quadrature decoding process. We define a technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical minimum error in the presence of Gaussian noise. We introduce the notions of single and collective adaptive quadrature detection. We also extend the results to a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities and signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme makes it possible to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.

  14. Moral empiricism and the bias for act-based rules.

    PubMed

    Ayars, Alisabeth; Nichols, Shaun

    2017-10-01

    Previous studies on rule learning show a bias in favor of act-based rules, which prohibit intentionally producing an outcome but not merely allowing the outcome. Nichols, Kumar, Lopez, Ayars, and Chan (2016) found that exposure to a single sample violation in which an agent intentionally causes the outcome was sufficient for participants to infer that the rule was act-based. One explanation is that people have an innate bias to think rules are act-based. We suggest an alternative empiricist account: since most rules that people learn are act-based, people form an overhypothesis (Goodman, 1955) that rules are typically act-based. We report three studies that indicate that people can use information about violations to form overhypotheses about rules. In study 1, participants learned either three "consequence-based" rules that prohibited allowing an outcome or three "act-based" rules that prohibited producing the outcome; in a subsequent learning task, we found that participants who had learned three consequence-based rules were more likely to think that the new rule prohibited allowing an outcome. In study 2, we presented participants with either 1 consequence-based rule or 3 consequence-based rules, and we found that those exposed to 3 such rules were more likely to think that a new rule was also consequence-based. Thus, in both studies, it seems that learning 3 consequence-based rules generates an overhypothesis to expect new rules to be consequence-based. In a final study, we used a more subtle manipulation. We exposed participants to examples of act-based or accident-based (strict liability) laws and then had them learn a novel rule. We found that participants who were exposed to the accident-based laws were more likely to think a new rule was accident-based. The fact that participants' bias for act-based rules can be shaped by evidence from other rules supports the idea that the bias for act-based rules might be acquired as an overhypothesis from the

  15. Algorithm 699 - A new representation of Patterson's quadrature formulae

    NASA Technical Reports Server (NTRS)

    Krogh, Fred T.; Van Snyder, W.

    1991-01-01

    A method is presented to reduce the number of coefficients necessary to represent Patterson's quadrature formulae. It also reduces the amount of storage necessary for storing function values, and produces slightly smaller error in evaluating the formulae.

  16. Notes on the boundaries of quadrature domains

    NASA Astrophysics Data System (ADS)

    Verma, Kaushal

    2018-03-01

    We highlight an intrinsic connection between classical quadrature domains and the well-studied theme of removable singularities of analytic sets in several complex variables. Exploiting this connection provides a new framework to recover several basic properties of such domains, namely the algebraicity of their boundary, a better understanding of the associated defining polynomial and the possible boundary singularities that can occur.

  17. An Application of the Quadrature-Free Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Atkins, Harold L.

    2000-01-01

    The process of generating a block-structured mesh with the smoothness required for high-accuracy schemes is still a time-consuming process often measured in weeks or months. Unstructured grids about complex geometries are more easily generated, and for this reason, methods using unstructured grids have gained favor for aerodynamic analyses. The discontinuous Galerkin (DG) method is a compact finite-element projection method that provides a practical framework for the development of a high-order method using unstructured grids. Higher-order accuracy is obtained by representing the solution as a high-degree polynomial whose time evolution is governed by a local Galerkin projection. The traditional implementation of the discontinuous Galerkin method uses quadrature for the evaluation of the integral projections and is prohibitively expensive. Atkins and Shu introduced the quadrature-free formulation in which the integrals are evaluated a priori and exactly for a similarity element. The approach has been demonstrated to possess the accuracy required for acoustics even in cases where the grid is not smooth. Other issues such as boundary conditions and the treatment of non-linear fluxes have also been studied in earlier work. This paper describes the application of the quadrature-free discontinuous Galerkin method to a two-dimensional shear layer problem. First, a brief description of the method is given. Next, the problem is described and the solution is presented. Finally, the resources required to perform the calculations are given.

  18. Quadrature imposition of compatibility conditions in Chebyshev methods

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Streett, C. L.

    1990-01-01

    Often, in solving an elliptic equation with Neumann boundary conditions, a compatibility condition has to be imposed for well-posedness. This condition involves integrals of the forcing function. When pseudospectral Chebyshev methods are used to discretize the partial differential equation, these integrals have to be approximated by an appropriate quadrature formula. The Gauss-Chebyshev formula (or any variant of it, such as the Gauss-Lobatto formula) cannot be used here, since the integrals under consideration do not include the weight function. A natural candidate for approximating the integrals is the Clenshaw-Curtis formula; however, it is shown that this is the wrong choice, and it may lead to divergence if time-dependent methods are used to march the solution to steady state. The correct quadrature formula is developed for these problems. This formula takes into account the degree of the polynomials involved. It is shown that this formula leads to a well-conditioned Chebyshev approximation to the differential equations and that the compatibility condition is automatically satisfied.

  19. Photoacoustic tomography using a Michelson interferometer with quadrature phase detection

    NASA Astrophysics Data System (ADS)

    Speirs, Rory W.; Bishop, Alexis I.

    2013-07-01

    We present a pressure sensor based on a Michelson interferometer, for use in photoacoustic tomography. Quadrature phase detection is employed allowing measurement at any point on the mirror surface without having to retune the interferometer, as is typically required by Fabry-Perot type detectors. This opens the door to rapid full surface detection, which is necessary for clinical applications. Theory relating acoustic pressure to detected acoustic particle displacements is used to calculate the detector sensitivity, which is validated with measurement. Proof-of-concept tomographic images of blood vessel phantoms have been taken with sub-millimeter resolution at depths of several millimeters.

  20. A double-quadrature radiofrequency coil design for proton-decoupled carbon-13 magnetic resonance spectroscopy in humans at 7T.

    PubMed

    Serés Roig, Eulalia; Magill, Arthur W; Donati, Guillaume; Meyerspeer, Martin; Xin, Lijing; Ipek, Ozlem; Gruetter, Rolf

    2015-02-01

    Carbon-13 magnetic resonance spectroscopy ((13) C-MRS) is challenging because of the inherent low sensitivity of (13) C detection and the need for radiofrequency transmission at the (1) H frequency while receiving the (13) C signal, the latter requiring electrical decoupling of the (13) C and (1) H radiofrequency channels. In this study, we added traps to the (13) C coil to construct a quadrature-(13) C/quadrature-(1) H surface coil, with sufficient isolation between channels to allow simultaneous operation at both frequencies without compromise in coil performance. Isolation between channels was evaluated on the bench by measuring all coupling parameters. The quadrature mode of the quadrature-(13) C coil was assessed using in vitro (23) Na gradient echo images. The signal-to-noise ratio (SNR) was measured on the glycogen and glucose resonances by (13) C-MRS in vitro, compared with that obtained with a linear-(13) C/quadrature-(1) H coil, and validated by (13) C-MRS in vivo in the human calf at 7T. Isolation between channels was better than -30 dB. The (23) Na gradient echo images indicate a region where the field is strongly circularly polarized. The quadrature coil provided an SNR enhancement over a linear coil of 1.4, in vitro and in vivo. It is feasible to construct a double-quadrature (13) C-(1) H surface coil for proton decoupled sensitivity enhanced (13) C-NMR spectroscopy in humans at 7T. © 2014 Wiley Periodicals, Inc.

  1. Archimedes Quadrature of the Parabola: A Mechanical View

    ERIC Educational Resources Information Center

    Oster, Thomas J.

    2006-01-01

    In his famous quadrature of the parabola, Archimedes found the area of the region bounded by a parabola and a chord. His method was to fill the region with infinitely many triangles each of whose area he could calculate. In his solution, he stated, without proof, three preliminary propositions about parabolas that were known in his time, but are…
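
    As a brief aside for the reader (standard material, not quoted from the record), Archimedes' result can be stated compactly: if T is the area of the triangle inscribed on the chord, with apex at the point of the parabola where the tangent is parallel to the chord, then each stage of his construction adds triangles totalling one quarter of the previous stage, so the segment area follows from a geometric series:

      \[
        A_{\text{segment}}
          \;=\; T\Bigl(1 + \tfrac{1}{4} + \tfrac{1}{4^{2}} + \cdots\Bigr)
          \;=\; T\sum_{k=0}^{\infty} 4^{-k}
          \;=\; \tfrac{4}{3}\,T .
      \]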

  2. Digital Detection and Processing of Multiple Quadrature Harmonics for EPR Spectroscopy

    PubMed Central

    Ahmad, R.; Som, S.; Kesselring, E.; Kuppusamy, P.; Zweier, J.L.; Potter, L.C.

    2010-01-01

    A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. PMID:20971667

  3. Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods

    NASA Astrophysics Data System (ADS)

    Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David

    2015-11-01

    Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break up, free surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than relying on a fixed mesh. Most methods move particles according to the local fluid velocity, which allows the convection terms in the Navier-Stokes equations to be easily accounted for. However, this is a trade-off against numerical accuracy, as the flow can often move particles into configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentrations. In this work, we consider the other side of the trade-off: moving particles based on reducing numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. Digital quadrature phase detection

    DOEpatents

    Smith, James A.; Johnson, John A.

    1992-01-01

    A system for detecting the phase of a frequency or phase modulated signal that includes digital quadrature sampling of the frequency or phase modulated signal at two times that are one quarter of a cycle of a reference signal apart, determination of the arctangent of the ratio of a first sampling of the frequency or phase modulated signal to the second sampling of the frequency or phase modulated signal, and a determination of quadrant in which the phase determination is increased by 2π when the quadrant changes from the first quadrant to the fourth quadrant and decreased by 2π when the quadrant changes from the fourth quadrant to the first quadrant, whereby the absolute phase of the frequency or phase modulated signal can be determined using an arbitrary reference convention.
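
    Purely as an illustration of the sampling-in-quadrature idea described above (not code from the patent), the following sketch pairs each sample with the sample taken a quarter reference cycle later, takes the arctangent, and lets numpy's unwrap handle the 2π quadrant bookkeeping; the test signal and all parameter values are invented for the example.

      import numpy as np

      fs, f_ref = 48_000.0, 1_000.0             # sample rate and reference frequency (Hz)
      t = np.arange(0, 0.05, 1.0 / fs)
      true_phase = 0.8 * np.sin(2 * np.pi * 20 * t)           # slow phase modulation
      sig = np.cos(2 * np.pi * f_ref * t + true_phase)

      # Quadrature sampling: a sample and the sample one quarter reference cycle
      # later behave as the cosine and (minus) sine of the instantaneous angle.
      quarter = int(fs / f_ref / 4)                           # 12 samples here
      i_comp, q_comp = sig[:-quarter], sig[quarter:]

      theta = np.unwrap(np.arctan2(-q_comp, i_comp))          # absolute angle, unwrapped
      est = theta - 2 * np.pi * f_ref * t[: len(theta)]       # remove the carrier ramp
      err = est - true_phase[: len(est)]
      print(np.max(np.abs(err - err.mean())))                 # small, up to the phase reference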

  5. Digital quadrature phase detection

    DOEpatents

    Smith, J.A.; Johnson, J.A.

    1992-05-26

    A system for detecting the phase of a frequency or phase modulated signal that includes digital quadrature sampling of the frequency or phase modulated signal at two times that are one quarter of a cycle of a reference signal apart, determination of the arctangent of the ratio of a first sampling of the frequency or phase modulated signal to the second sampling of the frequency or phase modulated signal, and a determination of quadrant in which the phase determination is increased by 2π when the quadrant changes from the first quadrant to the fourth quadrant and decreased by 2π when the quadrant changes from the fourth quadrant to the first quadrant, whereby the absolute phase of the frequency or phase modulated signal can be determined using an arbitrary reference convention. 6 figs.

  6. Rule-Based Runtime Verification

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.

  7. Feasibility of heart rate variability measurement from quadrature Doppler radar using arctangent demodulation with DC offset compensation.

    PubMed

    Massagram, Wansuree; Hafner, Noah M; Park, Byung-Kwan; Lubecke, Victor M; Host-Madsen, Anders; Boric-Lubecke, Olga

    2007-01-01

    This paper describes the experimental results of beat-to-beat interval measurement from a quadrature Doppler radar system utilizing arctangent demodulation with DC offset compensation techniques. The comparison of SDNN and RMSSD for both signals demonstrates the potential of using quadrature Doppler radar for HRV analysis.
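
    For readers unfamiliar with the two HRV metrics mentioned, the following minimal sketch computes them from a beat-to-beat (RR) interval series; the interval values are invented for illustration and are not data from the study.

      import numpy as np

      def hrv_time_domain(rr_ms):
          # SDNN: standard deviation of the beat-to-beat (RR) intervals.
          # RMSSD: root mean square of successive RR-interval differences.
          rr = np.asarray(rr_ms, dtype=float)
          sdnn = rr.std(ddof=1)
          rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
          return sdnn, rmssd

      radar_rr = [812, 798, 805, 820, 790, 801, 815, 808]   # hypothetical intervals (ms)
      ecg_rr = [810, 800, 804, 822, 792, 799, 816, 806]     # hypothetical reference (ms)
      print(hrv_time_domain(radar_rr))
      print(hrv_time_domain(ecg_rr))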

  8. Numerical integration of discontinuous functions: moment fitting and smart octree

    NASA Astrophysics Data System (ADS)

    Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander

    2017-11-01

    A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules do not perform well anymore. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the position and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules, resulting in fewer integration points and high accuracy.
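
    As a generic illustration of the moment-fitting idea (a simplified 1D analogue, not the authors' optimization of point positions or the smart-octree code), the sketch below keeps the standard Gauss points of a reference element that is cut at an arbitrary position and solves a small least-squares system so that the weights reproduce the moments of the monomials over the physical part of the element.

      import numpy as np

      # Moment fitting for an "intersected" 1D element: the reference element is
      # [-1, 1] but the physical (integration) domain is only [-1, cut].
      cut = 0.3
      n_basis = 6                                   # monomials 1, x, ..., x^5
      pts, _ = np.polynomial.legendre.leggauss(8)   # keep the standard point layout

      # Target moments of the monomials over the physical domain (analytic here)
      j = np.arange(n_basis)
      moments = (cut ** (j + 1) - (-1.0) ** (j + 1)) / (j + 1)

      # Solve sum_i w_i * x_i^j = m_j for the weights (least squares)
      vander = pts[None, :] ** j[:, None]
      weights, *_ = np.linalg.lstsq(vander, moments, rcond=None)

      # The fitted rule now integrates smooth functions over [-1, cut]
      approx = weights @ np.exp(pts)
      exact = np.exp(cut) - np.exp(-1.0)
      print(approx, exact)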

  9. A Synthetic Quadrature Phase Detector/Demodulator for Fourier Transform Spectrometers

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2008-01-01

    A method is developed to demodulate (velocity correct) Fourier transform spectrometer (FTS) data that is acquired with an analog-to-digital converter sampling at equally spaced times. This method makes it possible to use simple, low-cost, high-resolution audio digitizers to record high-quality data without the need for an event timer or quadrature laser hardware, and makes it possible to use a metrology laser of any wavelength. The reduced parts count and simplicity of implementation make it an attractive alternative in space-based applications when compared to previous methods such as the Brault algorithm.

  10. Digital detection and processing of multiple quadrature harmonics for EPR spectroscopy.

    PubMed

    Ahmad, R; Som, S; Kesselring, E; Kuppusamy, P; Zweier, J L; Potter, L C

    2010-12-01

    A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. A Gaussian quadrature method for total energy analysis in electronic state calculations

    NASA Astrophysics Data System (ADS)

    Fukushima, Kimichika

    This article reports studies by Fukushima and coworkers since 1980 concerning their highly accurate numerical integration method using Gaussian quadratures to evaluate the total energy in electronic state calculations. Gauss-Legendre and Gauss-Laguerre quadratures were used for integrals over finite and infinite regions, respectively. Our previous article showed that, for diatomic molecules such as CO and FeO, elliptic coordinates efficiently achieve high numerical integration accuracy even with a numerical basis set including transition metal atomic orbitals. This article generalizes the procedure to multiatomic systems, with direct integrals in each decomposed elliptic coordinate determined from the nuclear positions of selected atom pairs. Sample calculations were performed for the molecules O3 and H2O. This article also presents, in another coordinate system, a numerical integral that partially uses Becke's decomposition published in 1988, but without Becke's fuzzy cell generated by polynomials of the internuclear distance between the paired atoms. Instead, simple nuclear weights comprising exponential functions around the nuclei are used. The one-center integral is performed with a Gaussian quadrature package in spherical coordinates, included in the author's original program from around 1980. For this decomposition into one-center integrals, sample calculations are carried out for Li2.
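
    As a small, generic illustration of the two quadrature families named above (not the authors' program), the sketch below integrates a smooth function over a finite interval with Gauss-Legendre nodes and a semi-infinite integral with Gauss-Laguerre nodes; the integrands are arbitrary test functions.

      import numpy as np

      # Finite-interval integral with Gauss-Legendre: integral of exp(-x) on [0, 2]
      xg, wg = np.polynomial.legendre.leggauss(16)
      a, b = 0.0, 2.0
      x = 0.5 * (b - a) * xg + 0.5 * (b + a)          # map nodes from [-1, 1] to [a, b]
      finite = 0.5 * (b - a) * np.dot(wg, np.exp(-x))
      print(finite, 1.0 - np.exp(-2.0))

      # Semi-infinite integral with Gauss-Laguerre: integral of x^2 exp(-x) on [0, inf)
      xl, wl = np.polynomial.laguerre.laggauss(16)
      # laggauss nodes/weights already include the weight exp(-x), so integrate f(x) = x^2
      infinite = np.dot(wl, xl ** 2)
      print(infinite, 2.0)                            # Gamma(3) = 2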

  12. Simulation-Based Rule Generation Considering Readability

    PubMed Central

    Yahagi, H.; Shimizu, S.; Ogata, T.; Hara, T.; Ota, J.

    2015-01-01

    A rule generation method is proposed for an aircraft control problem in an airport. Designing appropriate rules for motion coordination of taxiing aircraft in the airport, a task performed by ground control, is important. However, previous studies did not consider the readability of rules, which matters because the rules must be operated and maintained by humans. Therefore, in this study, using an indicator of readability, we propose a method of rule generation based on parallel algorithm discovery and orchestration (PADO). Applied to the aircraft control problem, the proposed algorithm generates more readable and more robust rules and is found to be superior to previous methods. PMID:27347501

  13. A note on the bounds of the error of Gauss-Turan-type quadratures

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.; Spalevic, Miodrag M.

    2007-03-01

    This note is concerned with estimates for the remainder term R_{n,s}(f) of the Gauss-Turan quadrature formula with the Gori-Micchelli weight function, with U_{n-1}(t) denoting the (n-1)th degree Chebyshev polynomial of the second kind, where f is a function analytic in the interior of, and continuous on the boundary of, an ellipse with foci at the points ±1 and sum of semiaxes ρ > 1. The present paper generalizes the results in [G.V. Milovanovic, M.M. Spalevic, Bounds of the error of Gauss-Turan-type quadratures, J. Comput. Appl. Math. 178 (2005) 333-346], which is concerned with the same problem when s = 1.

  14. Significance testing of rules in rule-based models of human problem solving

    NASA Technical Reports Server (NTRS)

    Lewis, C. M.; Hammer, J. M.

    1986-01-01

    Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.

  15. Compartmental and Spatial Rule-Based Modeling with Virtual Cell.

    PubMed

    Blinov, Michael L; Schaff, James C; Vasilescu, Dan; Moraru, Ion I; Bloom, Judy E; Loew, Leslie M

    2017-10-03

    In rule-based modeling, molecular interactions are systematically specified in the form of reaction rules that serve as generators of reactions. This provides a way to account for all the potential molecular complexes and interactions among multivalent or multistate molecules. Recently, we introduced rule-based modeling into the Virtual Cell (VCell) modeling framework, permitting graphical specification of rules and merger of networks generated automatically (using the BioNetGen modeling engine) with hand-specified reaction networks. VCell provides a number of ordinary differential equation and stochastic numerical solvers for single-compartment simulations of the kinetic systems derived from these networks, and agent-based network-free simulation of the rules. In this work, compartmental and spatial modeling of rule-based models has been implemented within VCell. To enable rule-based deterministic and stochastic spatial simulations and network-free agent-based compartmental simulations, the BioNetGen and NFSim engines were each modified to support compartments. In the new rule-based formalism, every reactant and product pattern and every reaction rule are assigned locations. We also introduce the rule-based concept of molecular anchors. This assures that any species that has a molecule anchored to a predefined compartment will remain in this compartment. Importantly, in addition to formulation of compartmental models, this now permits VCell users to seamlessly connect reaction networks derived from rules to explicit geometries to automatically generate a system of reaction-diffusion equations. These may then be simulated using either the VCell partial differential equations deterministic solvers or the Smoldyn stochastic simulator. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  16. Neural Correlates of Phrase Quadrature Perception in Harmonic Rhythm: An EEG Study Using a Brain-Computer Interface.

    PubMed

    Fernández-Soto, Alicia; Martínez-Rodrigo, Arturo; Moncho-Bogani, José; Latorre, José Miguel; Fernández-Caballero, Antonio

    2018-06-01

    For the sake of establishing the neural correlates of phrase quadrature perception in harmonic rhythm, a musical experiment has been designed to induce music-evoked stimuli related to one important aspect of harmonic rhythm, namely the phrase quadrature. Brain activity is translated to action through electroencephalography (EEG) by using a brain-computer interface. The power spectral value of each EEG channel is estimated to determine how the signal's power is distributed as a function of frequency. The results of processing the acquired signals are in line with previous studies that use different musical parameters to induce emotions. Indeed, our experiment shows statistical differences in the theta and alpha bands between the fulfillment and the break of phrase quadrature, an important cue of harmonic rhythm, in two classical sonatas.
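
    As a generic illustration only (not the study's pipeline, electrode layout, or statistics), per-channel spectral power in the theta and alpha bands can be estimated along the following lines; the surrogate data and sampling rate are assumptions.

      import numpy as np
      from scipy.signal import welch

      fs = 256.0                                    # assumed EEG sample rate (Hz)
      rng = np.random.default_rng(1)
      eeg = rng.standard_normal((8, int(30 * fs)))  # 8 channels, 30 s of surrogate data

      def band_power(x, lo, hi):
          f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
          mask = (f >= lo) & (f < hi)
          return pxx[:, mask].sum(axis=-1) * (f[1] - f[0])   # integrated PSD in the band

      theta = band_power(eeg, 4.0, 8.0)             # theta-band power per channel
      alpha = band_power(eeg, 8.0, 13.0)            # alpha-band power per channel
      print(theta.round(3), alpha.round(3))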

  17. Automated rule-base creation via CLIPS-Induce

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick M.

    1994-01-01

    Many CLIPS rule-bases contain one or more rule groups that perform classification. In this paper, we describe CLIPS-Induce, an automated system for the creation of a CLIPS classification rule-base from a set of test cases. CLIPS-Induce consists of two components, a decision tree induction component and a CLIPS production extraction component. ID3, a popular decision tree induction algorithm, is used to induce a decision tree from the test cases. CLIPS production extraction is accomplished through a top-down traversal of the decision tree. Nodes of the tree are used to construct query rules, and branches of the tree are used to construct classification rules. The learned CLIPS productions may easily be incorporated into a large CLIPS system that performs tasks such as accessing a database or displaying information.

  18. Adaptive Quadrature for Item Response Models. Research Report. ETS RR-06-29

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2006-01-01

    Adaptive quadrature is applied to marginal maximum likelihood estimation for item response models with normal ability distributions. Even in one dimension, significant gains in speed and accuracy of computation may be achieved.
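
    To make the idea concrete (a generic sketch, not the report's implementation), adaptive Gauss-Hermite quadrature recenters and rescales the nodes using the posterior mode and curvature for each examinee before approximating the marginal likelihood; the toy 2PL item parameters and responses below are invented for the example.

      import numpy as np
      from numpy.polynomial.hermite import hermgauss
      from scipy.optimize import minimize_scalar

      a = np.array([1.2, 0.8, 1.5, 1.0])       # 2PL discriminations (made up)
      b = np.array([-0.5, 0.3, 0.0, 1.0])      # 2PL difficulties (made up)
      y = np.array([1, 0, 1, 1])               # one examinee's item responses

      def log_joint(theta):
          # log P(y | theta) + log N(theta; 0, 1): the integrand of the marginal likelihood
          p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1.0 - p))
          return loglik - 0.5 * theta ** 2 - 0.5 * np.log(2.0 * np.pi)

      # Adapt the rule: center at the posterior mode and scale by the local curvature.
      mode = minimize_scalar(lambda t: -log_joint(t)).x
      h = 1e-4
      curv = -(log_joint(mode + h) - 2.0 * log_joint(mode) + log_joint(mode - h)) / h ** 2
      sd = 1.0 / np.sqrt(curv)

      nodes, weights = hermgauss(5)             # only a handful of adapted nodes needed
      theta_k = mode + np.sqrt(2.0) * sd * nodes
      marginal = np.sqrt(2.0) * sd * np.sum(
          weights * np.exp(nodes ** 2) * np.exp([log_joint(t) for t in theta_k]))
      print(marginal)                           # marginal likelihood of the response pattern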

  19. Reconfigurable microwave photonic in-phase and quadrature detector for frequency agile radar.

    PubMed

    Emami, Hossein; Sarkhosh, Niusha

    2014-06-01

    A microwave photonic in-phase and quadrature detector is conceived and practically demonstrated. The detector has the ability to become electronically reconfigured to operate at any frequency over a wide range. This makes it an excellent candidate for frequency agile radars and other electronic warfare systems based on frequency hopping. The detector exhibits a very low amplitude and phase imbalance, which removes the need for any imbalance compensation technique. The system is designed based on the transversal filtering concept and reconfigurability is achieved via wavelength control in a dispersive fiber. The system operation was demonstrated over a frequency range of 3.5-35 GHz, with a maximum of -32 dB amplitude imbalance.

  20. A new perspective for quintic B-spline based Crank-Nicolson-differential quadrature method algorithm for numerical solutions of the nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Başhan, Ali; Uçar, Yusuf; Murat Yağmurlu, N.; Esen, Alaattin

    2018-01-01

    In the present paper, a Crank-Nicolson-differential quadrature method (CN-DQM) based on utilizing quintic B-splines as a tool has been carried out to obtain the numerical solutions for the nonlinear Schrödinger (NLS) equation. For this purpose, first of all, the Schrödinger equation has been converted into coupled real-valued differential equations and then they have been discretized using both the forward difference formula and the Crank-Nicolson method. After that, Rubin and Graves linearization techniques have been utilized and the differential quadrature method has been applied to obtain an algebraic equation system. Next, in order to be able to test the efficiency of the newly applied method, the error norms L_2 and L_∞, as well as the two lowest invariants I_1 and I_2, have been computed. Besides those, the relative changes in those invariants have been presented. Finally, the newly obtained numerical results have been compared with some of those available in the literature for similar parameters. This comparison clearly indicates that the currently utilized method, namely CN-DQM, is an effective and efficient numerical scheme that can be proposed for solving a wide range of nonlinear equations.

  1. Simulation of large-scale rule-based models

    PubMed Central

    Colvin, Joshua; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.; Von Hoff, Daniel D.; Posner, Richard G.

    2009-01-01

    Motivation: Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. Results: DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language, which is useful for modeling protein–protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of StochSim. DYNSTOC differs from StochSim by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. Availability: DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at http://public.tgen.org/dynstoc/. Contact: dynstoc@tgen.org Supplementary information

  2. Power flow control using quadrature boosters

    NASA Astrophysics Data System (ADS)

    Sadanandan, Sandeep N.

    A power system that can be controlled within security constraints would be an advantage to power planners and real-time operators. Controlling flows can lessen reliability issues such as thermal limit violations, power stability problems, and/or voltage stability conditions. Control of flows can also mitigate market issues by reducing congestion on some lines and rerouting power to less loaded lines or onto preferable paths. In the traditional control of power flows, phase shifters are often used. More advanced methods include using Flexible AC Transmission System (FACTS) Controllers. Some examples include Thyristor Controlled Series Capacitors, Synchronous Series Static Compensators, and Unified Power Flow Controllers. Quadrature Boosters (QBs) have similar structures to phase-shifters, but allow for higher voltage magnitude during real power flow control. In comparison with other FACTS controllers, QBs are less complex and less expensive. The present study proposes to use QBs to control power flows on a power system. With the inclusion of QBs, real power flows can be controlled to desired scheduled values. In this thesis, the linearized power flow equations used for power flow analysis were modified for the control problem. This included modifying the Jacobian matrix and the power error vector, and calculating the voltage injected by the quadrature booster for the scheduled real power flow. Two scenarios were examined using the proposed power flow control method. First, the power flow in a line in a 5-bus system was modified with a QB using the method developed in this thesis. Simulation was carried out using Matlab. Second, the method was applied to a 30-bus system and then to a 118-bus system using several QBs. In all the cases, the calculated values of the QB voltages led to the desired power flows in the designated line.
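
    As a toy illustration of the underlying idea (a linearized DC power flow with the quadrature booster approximated as a controllable phase shift, not the thesis' modified-Jacobian formulation), consider two parallel lines between a sending and a receiving bus; the reactances and scheduled flow below are arbitrary.

      import numpy as np

      # Two parallel lines between a sending and a receiving bus (DC power flow).
      x1, x2 = 0.2, 0.4          # per-unit line reactances
      p_total = 1.0              # total transfer (p.u.)

      def line_flows(phi):
          # Quadrature-booster effect approximated as a phase shift phi (rad) in line 1:
          # p1 = (delta + phi) / x1, p2 = delta / x2, with p1 + p2 = p_total.
          delta = (p_total - phi / x1) / (1.0 / x1 + 1.0 / x2)
          return (delta + phi) / x1, delta / x2

      print(line_flows(0.0))       # natural split, inversely proportional to reactance
      p1_sched = 0.5               # desired (scheduled) flow on line 1
      delta = x2 * (p_total - p1_sched)
      phi = x1 * p1_sched - delta
      print(phi, line_flows(phi))  # the booster setting that moves line 1 to 0.5 p.u.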

  3. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  4. Maximum of the modulus of kernels in Gauss-Turan quadratures

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.; Spalevic, Miodrag M.; Pranic, Miroslav S.

    2008-06-01

    We study the kernels K_{n,s}(z) in the remainder terms R_{n,s}(f) of the Gauss-Turan quadrature formulae for analytic functions on elliptical contours with foci at ±1, when the weight ω is a generalized Chebyshev weight function. For the generalized Chebyshev weight of the first (third) kind, it is shown that the modulus of the kernel |K_{n,s}(z)| attains its maximum on the real axis (positive real semi-axis) for each n ≥ n_0, n_0 = n_0(ρ, s). It was stated as a conjecture in [Mathematics of Computation 72 (2003), 1855-1872]. For the generalized Chebyshev weight of the second kind, in the case when the number of the nodes n in the corresponding Gauss-Turan quadrature formula is even, it is shown that the modulus of the kernel attains its maximum on the imaginary axis for each n ≥ n_0, n_0 = n_0(ρ, s). Numerical examples are included.

  5. A logical model of cooperating rule-based systems

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.

    1989-01-01

    A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.

  6. Accurate cell counts in live mouse embryos using optical quadrature and differential interference contrast microscopy

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; Newmark, Judith A.; Zhao, Bing; Warner, Carol M.; DiMarzio, Charles A.

    2006-02-01

    Present imaging techniques used in in vitro fertilization (IVF) clinics are unable to produce accurate cell counts in developing embryos past the eight-cell stage. We have developed a method that has produced accurate cell counts in live mouse embryos ranging from 13-25 cells by combining Differential Interference Contrast (DIC) and Optical Quadrature Microscopy. Optical Quadrature Microscopy is an interferometric imaging modality that measures the amplitude and phase of the signal beam that travels through the embryo. The phase is transformed into an image of optical path length difference, which is used to determine the maximum optical path length deviation of a single cell. DIC microscopy gives distinct cell boundaries for cells within the focal plane when other cells do not lie in the path to the objective. Fitting an ellipse to the boundary of a single cell in the DIC image and combining it with the maximum optical path length deviation of a single cell creates an ellipsoidal model cell of optical path length deviation. Subtracting the model cell from the Optical Quadrature image will either show the optical path length deviation of the culture medium or reveal another cell underneath. Once all the boundaries are used in the DIC image, the subtracted Optical Quadrature image is analyzed to determine the cell boundaries of the remaining cells. The final cell count is produced when no more cells can be subtracted. We have produced exact cell counts on 5 samples, which have been validated by Epi-Fluorescence images of Hoechst stained nuclei.

  7. Rule-based modeling with Virtual Cell

    PubMed Central

    Schaff, James C.; Vasilescu, Dan; Moraru, Ion I.; Loew, Leslie M.; Blinov, Michael L.

    2016-01-01

    Summary: Rule-based modeling is invaluable when the number of possible species and reactions in a model become too large to allow convenient manual specification. The popular rule-based software tools BioNetGen and NFSim provide powerful modeling and simulation capabilities at the cost of learning a complex scripting language which is used to specify these models. Here, we introduce a modeling tool that combines new graphical rule-based model specification with existing simulation engines in a seamless way within the familiar Virtual Cell (VCell) modeling environment. A mathematical model can be built integrating explicit reaction networks with reaction rules. In addition to offering a large choice of ODE and stochastic solvers, a model can be simulated using a network free approach through the NFSim simulation engine. Availability and implementation: Available as VCell (versions 6.0 and later) at the Virtual Cell web site (http://vcell.org/). The application installs and runs on all major platforms and does not require registration for use on the user’s computer. Tutorials are available at the Virtual Cell website and Help is provided within the software. Source code is available at Sourceforge. Contact: vcell_support@uchc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27497444

  8. On Decision-Making Among Multiple Rule-Bases in Fuzzy Control Systems

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward; Jamshidi, Mo

    1997-01-01

    Intelligent control of complex multi-variable systems can be a challenge for single fuzzy rule-based controllers. This class of problems can often be managed with less difficulty by distributing intelligent decision-making amongst a collection of rule-bases. Such an approach requires that a mechanism be chosen to ensure goal-oriented interaction between the multiple rule-bases. In this paper, a hierarchical rule-based approach is described. Decision-making mechanisms based on generalized concepts from single-rule-based fuzzy control are described. Finally, the effects of different aggregation operators on multi-rule-base decision-making are examined in a navigation control problem for mobile robots.

  9. An XML-Based Manipulation and Query Language for Rule-Based Information

    NASA Astrophysics Data System (ADS)

    Mansour, Essam; Höpfner, Hagen

    Rules are utilized to assist in the monitoring process required in activities such as disease management and customer relationship management. These rules are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing these rules as a single object with a management life-cycle. This paper presents our manipulation and query language, developed to facilitate the maintenance of this object during its life-cycle and to query the information it contains. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.

  10. Light-controlled resistors provide quadrature signal rejection for high-gain servo systems

    NASA Technical Reports Server (NTRS)

    Mc Cauley, D. D.

    1967-01-01

    A servo amplifier feedback system, in which the phase-sensitive detection, low-pass filtering, and multiplication functions required for quadrature rejection are performed by light-controlled photoresistors, eliminates complex circuitry. The system increases gain, improves the signal-to-noise ratio, and eliminates the necessity for compensation.

  11. Rule-based navigation control design for autonomous flight

    NASA Astrophysics Data System (ADS)

    Contreras, Hugo; Bassi, Danilo

    2008-04-01

    This article depicts a navigation control system design that is based on a set of rules in order to follow a desired trajectory. The full aircraft control considered here comprises a low-level stability control loop, based on a classic PID controller, and a higher-level navigation loop whose main task is to exercise lateral (course) and altitude control while trying to follow a desired trajectory. The rules and PID gains were adjusted systematically according to the results of flight simulation. In spite of its simplicity, the rule-based navigation control proved to be robust, even under large perturbations such as crosswinds.

  12. Design and implementation of quadrature bandpass sigma-delta modulator used in low-IF RF receiver

    NASA Astrophysics Data System (ADS)

    Ge, Binjie; Li, Yan; Yu, Hang; Feng, Xiaoxing

    2018-05-01

    This paper presents the design and implementation of a quadrature bandpass sigma-delta modulator. A pole movement method for transforming a real sigma-delta modulator into a quadrature one is proposed, based on a detailed study of the relationship between the noise-shaping center frequency and the integrator pole positions in a sigma-delta modulator. The proposed modulator uses a sampling-capacitor-sharing switched-capacitor integrator and achieves a very small feedback coefficient with a series capacitor network; these two techniques dramatically reduce capacitor area. A quantizer-output-dependent dummy capacitor load on the reference voltage buffer compensates for the signal-dependent noise caused by load variation. This paper designs a quadrature bandpass sigma-delta modulator for 2.4 GHz low-IF receivers that achieves 69 dB SNDR at 1 MHz BW and -1 MHz IF with a 48 MHz clock. The chip is fabricated in SMIC 0.18 μm CMOS technology; it draws a total current of 2.1 mA, and the chip area is 0.48 mm2. Project supported by the National Natural Science Foundation of China (Nos. 61471245, U1201256), the Guangdong Province Foundation (No. 2014B090901031), and the Shenzhen Foundation (Nos. JCYJ20160308095019383, JSGG20150529160945187).
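
    As a numerical aside (a generic sketch of the zero/pole "movement" idea, not the chip's loop filter), rotating the zeros of a real low-pass noise transfer function from DC to the intermediate frequency yields a complex, one-sided (quadrature) band-pass characteristic; the second-order NTF below and the -1 MHz IF / 48 MHz clock are used only as example numbers taken from the abstract.

      import numpy as np

      fs = 48e6                  # sampling rate (Hz)
      f_if = -1e6                # intermediate frequency (Hz)
      shift = np.exp(2j * np.pi * f_if / fs)

      def ntf_real(z):
          # Second-order low-pass noise shaping: both zeros at z = 1 (DC).
          return (1.0 - z ** -1) ** 2

      def ntf_quadrature(z):
          # The same zeros rotated to the IF: a complex (quadrature) band-pass NTF
          # whose notch appears only near f_if, with no image at -f_if.
          return (1.0 - shift * z ** -1) ** 2

      f = np.linspace(-fs / 2, fs / 2, 2001)
      z = np.exp(2j * np.pi * f / fs)
      print(f[np.argmin(np.abs(ntf_real(z)))],        # real NTF: notch at DC
            f[np.argmin(np.abs(ntf_quadrature(z)))])  # quadrature NTF: notch near -1 MHz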

  13. Analysis of Rules for Islamic Inheritance Law in Indonesia Using Hybrid Rule Based Learning

    NASA Astrophysics Data System (ADS)

    Khosyi'ah, S.; Irfan, M.; Maylawati, D. S.; Mukhlas, O. S.

    2018-01-01

    Along with the development of human civilization in Indonesia, changes and reforms of Islamic inheritance law to conform to local conditions and culture cannot be denied. The distribution of inheritance in Indonesia can be done automatically by storing the rules of Islamic inheritance law in an expert system. In this study, we analyze the knowledge of experts in Islamic inheritance in Indonesia and represent it in the form of rules using rule-based Forward Chaining (FC) and Davis-Putnam-Logemann-Loveland (DPLL) algorithms. By hybridizing the FC and DPLL algorithms, the rules of Islamic inheritance law in Indonesia are clearly defined and measured. The rules were conceptually validated by experts in Islamic law and informatics. The results revealed that generally all rules were ready for use in an expert system.

  14. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
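
    To make the convolution quadrature side concrete (a minimal Lubich-style sketch with a BDF2 generating function, assuming a kernel known only through its Laplace transform; it is not the BEM code compared in the paper), the weights can be computed by an FFT on a slightly contracted unit circle and then applied as a discrete convolution:

      import numpy as np

      def cq_weights(K, dt, N):
          # Convolution quadrature weights w_0..w_N for a kernel given by its
          # Laplace transform K(s), using the BDF2 generating function and an FFT.
          L = 2 * (N + 1)
          r = 1e-8 ** (1.0 / L)                       # contour radius
          zeta = r * np.exp(2j * np.pi * np.arange(L) / L)
          delta = (1.0 - zeta) + 0.5 * (1.0 - zeta) ** 2
          vals = K(delta / dt)
          w = np.fft.fft(vals) / L
          return (w * r ** (-np.arange(L, dtype=float)))[: N + 1].real

      # Example: K(s) = 1/s, i.e. kernel k(t) = 1, so the convolution is a running integral.
      T, N = 4.0, 200
      dt = T / N
      t = np.arange(N + 1) * dt
      g = np.cos(t)
      w = cq_weights(lambda s: 1.0 / s, dt, N)
      conv = np.array([np.dot(w[: n + 1][::-1], g[: n + 1]) for n in range(N + 1)])
      print(abs(conv[-1] - np.sin(T)))   # close to zero: roughly second-order accurate in dt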

  15. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  16. Target-Based Maintenance of Privacy Preserving Association Rules

    ERIC Educational Resources Information Center

    Ahluwalia, Madhu V.

    2011-01-01

    In the context of association rule mining, the state-of-the-art in privacy preserving data mining provides solutions for categorical and Boolean association rules but not for quantitative association rules. This research fills this gap by describing a method based on discrete wavelet transform (DWT) to protect input data privacy while preserving…

  17. Gaussian quadrature and lattice discretization of the Fermi-Dirac distribution for graphene.

    PubMed

    Oettinger, D; Mendoza, M; Herrmann, H J

    2013-07-01

    We construct a lattice kinetic scheme to study electronic flow in graphene. For this purpose, we first derive a basis of orthogonal polynomials, using as the weight function the ultrarelativistic Fermi-Dirac distribution at rest. Later, we use these polynomials to expand the respective distribution in a moving frame, for both cases, undoped and doped graphene. In order to discretize the Boltzmann equation and make feasible the numerical implementation, we reduce the number of discrete points in momentum space to 18 by applying a Gaussian quadrature, finding that the family of representative wave (2+1)-vectors, which satisfies the quadrature, reconstructs a honeycomb lattice. The procedure and discrete model are validated by solving the Riemann problem, finding excellent agreement with other numerical models. In addition, we have extended the Riemann problem to the case of different dopings, finding that increasing the chemical potential makes the electronic fluid behave as if its effective viscosity increased.

  18. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.

  19. Cross-quadrature modulation with the Raman-induced Kerr effect

    NASA Astrophysics Data System (ADS)

    Levenson, M. D.; Holland, M. J.; Walls, D. F.; Manson, P. J.; Fisk, P. T. H.; Bachor, H. A.

    1991-08-01

    The Raman-enhanced third-order optical nonlinearity of calcite potentially can support resonant back-action-evading measurement of the optical-field amplitude. In a preliminary experiment, we have observed cross-quadrature modulation transfer between an amplitude-modulated pump beam and an unmodulated probe beam tuned near the Stokes frequency. The theory of Holland et al. [Phys. Rev. A 42, 2995 (1990)] is extended to the case for which intracavity losses are significant in an attempt to account for the observations.

  20. Study of quadrature FIR filters for extraction of low-frequency instantaneous information in biophysical signals

    NASA Astrophysics Data System (ADS)

    Arce-Guevara, Valdemar E.; Alba-Cadena, Alfonso; Mendez, Martín O.

    Quadrature bandpass filters take a real-valued signal and output an analytic signal from which the instantaneous amplitude and phase can be computed. For this reason, they represent a useful tool to extract time-varying, narrow-band information from electrophysiological signals such as the electroencephalogram (EEG) or electrocardiogram. One of the defining characteristics of quadrature filters is their null response to negative frequencies. However, when the frequency band of interest is close to 0 Hz, a careless filter design could let through negative frequencies, producing distortions in the amplitude and phase of the output. In this work, three types of quadrature filters (Ideal, Gabor and Sinusoidal) have been evaluated using both artificial and real EEG signals. For the artificial signals, the performance of each filter was measured in terms of the distortion in amplitude and phase, and the sensitivity to noise and bandwidth selection. For the real EEG signals, a qualitative evaluation of the dynamics of the synchronization between two EEG channels was performed. The results suggest that, while all filters under study behave similarly under noise, they differ in terms of their sensitivity to bandwidth choice. In this study, the Sinusoidal filter showed clear advantages for the estimation of low-frequency EEG synchronization.
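
    A minimal sketch of the negative-frequency leakage issue discussed above, using a complex Gabor-type kernel; the sampling rate, centre frequency and bandwidth are illustrative values, not those of the study.

```python
# Minimal sketch: a Gabor-type quadrature FIR kernel and its residual response
# to negative frequencies. When the centre frequency is close to 0 Hz relative
# to the bandwidth, energy leaks into negative frequencies, which is the
# distortion mechanism discussed above.
import numpy as np

fs = 250.0                    # sampling rate (Hz)
f0 = 2.0                      # centre frequency (Hz), deliberately near 0 Hz
sigma = 0.1                   # envelope width (s); frequency-domain std ~ 1.6 Hz
t = np.arange(-2, 2, 1 / fs)

h = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * t)  # complex Gabor kernel
H = np.fft.fftshift(np.fft.fft(h))
f = np.fft.fftshift(np.fft.fftfreq(len(h), 1 / fs))

leak = np.sum(np.abs(H[f < 0])**2) / np.sum(np.abs(H)**2)
print(f"fraction of filter energy at negative frequencies: {leak:.3%}")
```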

  1. Parallel inferencing method and apparatus for rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M. (Inventor); Moldovan, Dan (Inventor); Kuo, Steve (Inventor)

    1993-01-01

    The invention analyzes areas of conditions with an expert knowledge base of rules using plural separate nodes which fire respective rules of said knowledge base, each of said rules upon being fired altering certain of said conditions predicated upon the existence of other said conditions. The invention operates by constructing a P representation of all pairs of said rules which are input dependent or output dependent; constructing a C representation of all pairs of said rules which are communication dependent or input dependent; determining which of the rules are ready to fire by matching the predicate conditions of each rule with the conditions of said set; enabling said node means to simultaneously fire those of the rules ready to fire which are defined by said P representation as being free of input and output dependencies; and communicating from each node enabled by said enabling step the alteration of conditions by the corresponding rule to other nodes whose rules are defined by said C matrix means as being input or communication dependent upon the rule of said enabled node.

  2. Redundancy checking algorithms based on parallel novel extension rule

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Yang, Yang; Li, Guangli; Wang, Qi; Lü, Shuai

    2017-05-01

    Redundancy checking (RC) is a key knowledge reduction technology. Extension rule (ER) is a new reasoning method, first presented in 2003 and well received by experts at home and abroad. Novel extension rule (NER) is an improved ER-based reasoning method, presented in 2009. In this paper, we first analyse the characteristics of the extension rule, and then present a simple algorithm for redundancy checking based on the extension rule (RCER). In addition, we introduce MIMF, a type of heuristic strategy. Using the aforementioned rule and strategy, we design and implement the RCHER algorithm, which relies on MIMF. Next we design and implement an RCNER (redundancy checking based on NER) algorithm. Parallel computing greatly accelerates the NER algorithm, which has weak dependence among tasks when executed. Considering this, we present PNER (parallel NER) and apply it to redundancy checking and necessity checking. Furthermore, we design and implement the RCPNER (redundancy checking based on PNER) and NCPPNER (necessary clause partition based on PNER) algorithms as well. The experimental results show that MIMF significantly accelerates the RCER algorithm on large-scale, highly redundant formulae. Comparing PNER with NER and RCPNER with RCNER, the average speedup can approach the number of task decompositions. Comparing NCPPNER with the RCNER-based algorithm on separating redundant formulae, the speedup increases steadily as the scale of the formulae grows. Finally, we describe the challenges that the extension rule will face and suggest possible solutions.

  3. Field-quadrature and photon-number correlations produced by parametric processes.

    PubMed

    McKinstrie, C J; Karlsson, M; Tong, Z

    2010-09-13

    In a previous paper [Opt. Express 13, 4986 (2005)], formulas were derived for the field-quadrature and photon-number variances produced by multiple-mode parametric processes. In this paper, formulas are derived for the quadrature and number correlations. The number formulas are used to analyze the properties of basic devices, such as two-mode amplifiers, attenuators and frequency convertors, and composite systems made from these devices, such as cascaded parametric amplifiers and communication links. Amplifiers generate idlers that are correlated with the amplified signals, or correlate pre-existing pairs of modes, whereas attenuators decorrelate pre-existing modes. Both types of device modify the signal-to-noise ratios (SNRs) of the modes on which they act. Amplifiers decrease or increase the mode SNRs, depending on whether they are operated in phase-insensitive (PI) or phase-sensitive (PS) manners, respectively, whereas attenuators always decrease these SNRs. Two-mode PS links are sequences of transmission fibers (attenuators) followed by two-mode PS amplifiers. Not only do these PS links have noise figures that are 6-dB lower than those of the corresponding PI links, they also produce idlers that are (almost) completely correlated with the signals. By detecting the signals and idlers, one can eliminate the effects of electronic noise in the detectors.

  4. A 9-Bit 50 MSPS Quadrature Parallel Pipeline ADC for Communication Receiver Application

    NASA Astrophysics Data System (ADS)

    Roy, Sounak; Banerjee, Swapna

    2018-03-01

    This paper presents the design and implementation of a pipeline Analog-to-Digital Converter (ADC) for superheterodyne receiver application. Several enhancement techniques have been applied in implementing the ADC, in order to relax the target specifications of its building blocks. The concepts of time interleaving and double sampling have been used simultaneously to enhance the sampling speed and to reduce the number of amplifiers used in the ADC. Removal of a front-end sample-and-hold amplifier is possible by employing dynamic comparators with switched-capacitor based comparison of the input signal and reference voltage. Each module of the ADC comprises two 2.5-bit stages followed by two 1.5-bit stages and a 3-bit flash stage. Four such pipeline ADC modules are time interleaved using two pairs of non-overlapping clock signals. These two pairs of clock signals are in phase quadrature with each other. Hence the term quadrature parallel pipeline ADC has been used. These configurations ensure that the entire ADC contains only eight operational transconductance amplifiers. The ADC is implemented in a 0.18-μm CMOS process with a supply voltage of 1.8 V. The prototype is tested at sampling frequencies of 50 and 75 MSPS, producing an Effective Number of Bits (ENOB) of 6.86 and 6.11 bits, respectively. At peak sampling speed, the core ADC consumes only 65 mW of power.

  6. Cognitive changes in conjunctive rule-based category learning: An ERP approach.

    PubMed

    Rabi, Rahel; Joanisse, Marc F; Zhu, Tianshu; Minda, John Paul

    2018-06-25

    When learning rule-based categories, sufficient cognitive resources are needed to test hypotheses, maintain the currently active rule in working memory, update rules after feedback, and select a new rule if necessary. Prior research has demonstrated that conjunctive rules are more complex than unidimensional rules and place greater demands on executive functions like working memory. In our study, event-related potentials (ERPs) were recorded while participants performed a conjunctive rule-based category learning task with trial-by-trial feedback. In line with prior research, correct categorization responses resulted in a larger stimulus-locked late positive complex compared to incorrect responses, possibly indexing the updating of rule information in memory. Incorrect trials elicited a pronounced feedback-locked P300, which suggested a disconnect between perception and the rule-based strategy. We also examined the differential processing of stimuli that could be correctly classified by the suboptimal single-dimensional rule ("easy" stimuli) versus those that could only be correctly classified by the optimal, conjunctive rule ("difficult" stimuli). Among strong learners, a larger, late positive slow wave emerged for difficult compared with easy stimuli, suggesting differential processing of category items even though strong learners performed well on the conjunctive category set. Overall, the findings suggest that ERP combined with computational modelling can be used to better understand the cognitive processes involved in rule-based category learning.

  7. Beam shape coefficients calculation for an elliptical Gaussian beam with 1-dimensional quadrature and localized approximation methods

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Shen, Jianqi

    2018-06-01

    The use of a shaped beam for applications relying on light scattering depends largely on the ability to evaluate the beam shape coefficients (BSC) effectively. Numerical techniques for evaluating the BSCs of a shaped beam, such as the quadrature, the localized approximation (LA) and the integral localized approximation (ILA) methods, have been developed within the framework of generalized Lorenz-Mie theory (GLMT). The quadrature methods usually employ 2-/3-dimensional integrations. In this work, the expressions of the BSCs for an elliptical Gaussian beam (EGB) are simplified into a 1-dimensional integral so as to speed up the numerical computation. Numerical results of the BSCs are used to reconstruct the beam field, and the fidelity of the reconstructed field to the given beam field is estimated. It is demonstrated that the proposed method is much faster than the 2-dimensional integrations and that it yields more accurate results than the LA method. Limitations of the quadrature method and of the LA method in the numerical calculation are analyzed in detail.

  8. Bounds of the error of Gauss-Turan-type quadratures

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.; Spalevic, Miodrag M.

    2005-06-01

    We consider the remainder term of the Gauss-Turan quadrature formulae for analytic functions in some region of the complex plane containing the interval [-1,1] in its interior. The remainder term is presented in the form of a contour integral over confocal ellipses or circles. A strong error analysis is given for the case with a generalized class of weight functions, introduced recently by Gori and Micchelli. Also, we discuss a general case with an even weight function defined on [-1,1]. Numerical results are included.
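
    As a numerical illustration of the setting above, the sketch below shows the geometric decay of the quadrature error for an analytic integrand, using ordinary Gauss-Legendre quadrature (the s = 0 special case of Gauss-Turan rules); it is not an implementation of the contour-integral bounds themselves.

```python
# Minimal sketch: for an analytic integrand, the Gauss quadrature error decays
# geometrically with the number of nodes, the behaviour that contour-integral
# remainder bounds quantify.
import numpy as np

f = np.exp
exact = np.e - 1.0 / np.e          # integral of exp(x) over [-1, 1]

for n in range(2, 11, 2):
    x, w = np.polynomial.legendre.leggauss(n)
    err = abs(w @ f(x) - exact)
    print(f"n = {n:2d}   |error| = {err:.3e}")
```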

  9. WellnessRules: A Web 3.0 Case Study in RuleML-Based Prolog-N3 Profile Interoperation

    NASA Astrophysics Data System (ADS)

    Boley, Harold; Osmun, Taylor Michael; Craig, Benjamin Larry

    An interoperation study, WellnessRules, is described, where rules about wellness opportunities are created by participants in rule languages such as Prolog and N3, and translated within a wellness community using RuleML/XML. The wellness rules are centered around participants, as profiles, encoding knowledge about their activities conditional on the season, the time-of-day, the weather, etc. This distributed knowledge base extends FOAF profiles with a vocabulary and rules about wellness group networking. The communication between participants is organized through Rule Responder, permitting wellness-profile translation and distributed querying across engines. WellnessRules interoperates between rules and queries in the relational (Datalog) paradigm of the pure-Prolog subset of POSL and in the frame (F-logic) paradigm of N3. An evaluation of Rule Responder instantiated for WellnessRules revealed acceptable Web response times.

  10. Personalization of Rule-based Web Services.

    PubMed

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents in a Web Services System. This system utilizes matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.

  11. Low-Latitude Solar Wind During the Fall 1998 SOHO-Ulysses Quadrature

    NASA Technical Reports Server (NTRS)

    Poletto, G.; Suess, Steven T.; Biesecker, D.; Esser, R.; Gloeckler, G.; Zurbuchen, T.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The Fall 1998 SOlar-Heliospheric Observatory (SOHO) - Ulysses quadrature occurred when Ulysses was at 5.2 AU, 17.4 deg south of the equator, and off the west limb of the Sun. SOHO coronal observations, at heliocentric distances of a few solar radii, showed that the line through the solar center and Ulysses crossed a dark, weakly emitting area over the first days of observations and passed through the northern edge of a streamer complex during the second half of the quadrature campaign. Ulysses in situ observations showed this transition to correspond to a decrease from higher speed wind typical of coronal hole flow to low speed wind. Physical parameters (density, temperature, flow speed) of the low-latitude coronal plasma sampled over the campaign are determined using constraints from the same plasma measured later in situ and by simulating the intensities of the hydrogen Lyman-alpha and O VI 1032 and 1037 Angstrom lines measured by the Ultra Violet Coronagraph Spectrometer (UVCS) on SOHO. The densities, temperatures and outflow speeds are compared with the same characteristic flow parameters for high-latitude fast wind streams and typical slow solar wind.

  12. Error analysis in some Gauss-Turan-Radau and Gauss-Turan-Lobatto quadratures for analytic functions

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.; Spalevic, Miodrag M.

    2004-03-01

    We consider the generalized Gauss-Turan quadrature formulae of Radau and Lobatto type for approximating weighted integrals over [-1,1]. The aim of this paper is to analyze the remainder term in the case when f is an analytic function in some region of the complex plane containing the interval [-1,1] in its interior. The remainder term is presented in the form of a contour integral over confocal ellipses (cf. SIAM J. Numer. Anal. 80 (1983) 1170). Sufficient conditions for the convergence of some of these quadratures, associated with the generalized Chebyshev weight functions, are found. Using some ideas from Hunter (BIT 35 (1995) 64) we obtain new estimates of the remainder term, which are very sharp. Some numerical results and illustrations are shown.

  13. Movement rules for individual-based models of stream fish

    Treesearch

    Steven F. Railsback; Roland H. Lamberson; Bret C. Harvey; Walter E. Duffy

    1999-01-01

    Abstract - Spatially explicit individual-based models (IBMs) use movement rules to determine when an animal departs its current location and to determine its movement destination; these rules are therefore critical to accurate simulations. Movement rules typically define some measure of how an individual's expected fitness varies among locations, under the...

  14. Combination Rules for Morse-Based van der Waals Force Fields.

    PubMed

    Yang, Li; Sun, Lei; Deng, Wei-Qiao

    2018-02-15

    In traditional force fields (FFs), van der Waals interactions have usually been described by Lennard-Jones potentials. Conventional combination rules for the parameters of van der Waals (VDW) cross-term interactions were developed for Lennard-Jones based FFs. Here, we report that the Morse potential is a better function for describing VDW interactions calculated by highly precise quantum mechanics methods. A new set of combination rules was developed for Morse-based FFs, in which VDW interactions are described by Morse potentials. The new set of combination rules has been verified by comparing the second virial coefficients of 11 noble gas mixtures. For all of the mixed binaries considered in this work, the combination rules work very well and are superior to all three other existing sets of combination rules reported in the literature. We further used the Morse-based FF with the new combination rules to simulate the adsorption isotherms of CH4 at 298 K in four covalent-organic frameworks (COFs). The overall agreement is very good, which supports further applications of this new set of combination rules in more realistic simulation systems.
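
    A minimal sketch of a Morse pair potential combined with a simple Lorentz-Berthelot-style mixing rule as a baseline; the parameter values are illustrative and the improved combination rules of the paper are not reproduced here.

```python
# Minimal sketch: Morse pair potential with baseline Lorentz-Berthelot-style
# mixing (geometric mean for the well depth, arithmetic means otherwise).
# Parameter values are hypothetical, not fitted to any real species.
import numpy as np

def morse(r, D_e, alpha, r_e):
    """Morse potential: well depth D_e, stiffness alpha, equilibrium distance r_e."""
    x = np.exp(-alpha * (r - r_e))
    return D_e * (x * x - 2.0 * x)

# Hypothetical like-pair parameters for species A and B.
A = dict(D_e=1.2, alpha=1.7, r_e=3.4)
B = dict(D_e=0.3, alpha=2.1, r_e=3.0)

# Baseline mixing rule for the A-B cross term.
AB = dict(D_e=np.sqrt(A["D_e"] * B["D_e"]),
          alpha=0.5 * (A["alpha"] + B["alpha"]),
          r_e=0.5 * (A["r_e"] + B["r_e"]))

r = np.linspace(2.5, 8.0, 6)
print(morse(r, **AB))
```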

  15. Concurrence of rule- and similarity-based mechanisms in artificial grammar learning.

    PubMed

    Opitz, Bertram; Hofmann, Juliane

    2015-03-01

    A current theoretical debate regards whether rule-based or similarity-based learning prevails during artificial grammar learning (AGL). Although the majority of findings are consistent with a similarity-based account of AGL it has been argued that these results were obtained only after limited exposure to study exemplars, and performance on subsequent grammaticality judgment tests has often been barely above chance level. In three experiments the conditions were investigated under which rule- and similarity-based learning could be applied. Participants were exposed to exemplars of an artificial grammar under different (implicit and explicit) learning instructions. The analysis of receiver operating characteristics (ROC) during a final grammaticality judgment test revealed that explicit but not implicit learning led to rule knowledge. It also demonstrated that this knowledge base is built up gradually while similarity knowledge governed the initial state of learning. Together these results indicate that rule- and similarity-based mechanisms concur during AGL. Moreover, it could be speculated that two different rule processes might operate in parallel; bottom-up learning via gradual rule extraction and top-down learning via rule testing. Crucially, the latter is facilitated by performance feedback that encourages explicit hypothesis testing. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Elementary test for nonclassicality based on measurements of position and momentum

    NASA Astrophysics Data System (ADS)

    Fresta, Luca; Borregaard, Johannes; Sørensen, Anders S.

    2015-12-01

    We generalize a nonclassicality test described by Kot et al. [Phys. Rev. Lett. 108, 233601 (2012), 10.1103/PhysRevLett.108.233601], which can be used to rule out any classical description of a physical system. The test is based on measurements of quadrature operators and works by proving a contradiction with the classical description in terms of a probability distribution in phase space. As opposed to the previous work, we generalize the test to include states without rotational symmetry in phase space. Furthermore, we compare the performance of the nonclassicality test with classical tomography methods based on the inverse Radon transform, which can also be used to establish the quantum nature of a physical system. In particular, we consider a nonclassicality test based on the so-called filtered back-projection formula. We show that the general nonclassicality test is conceptually simpler, requires less assumptions on the system, and is statistically more reliable than the tests based on the filtered back-projection formula. As a specific example, we derive the optimal test for quadrature squeezed single-photon states and show that the efficiency of the test does not change with the degree of squeezing.

  17. Revised Interim Final Consolidated Enforcement Response and Penalty Policy for the Pre-Renovation Education Rule; Renovation, Repair and Painting Rule; and Lead-Based Paint Activities Rule

    EPA Pesticide Factsheets

    This is the revised version of the Interim Final Consolidated Enforcement Response and Penalty Policy for the Pre-Renovation Education Rule; Renovation, Repair and Painting Rule; and Lead-Based Paint Activities Rule.

  18. Organizational Knowledge Transfer Using Ontologies and a Rule-Based System

    NASA Astrophysics Data System (ADS)

    Okabe, Masao; Yoshioka, Akiko; Kobayashi, Keido; Yamaguchi, Takahira

    In recent automated and integrated manufacturing, so-called intelligence skill is becoming more and more important, and its efficient transfer to next-generation engineers is an urgent issue. In this paper, we propose a new approach that avoids costly OJT (on-the-job training), namely the combined use of a domain ontology, a rule ontology and a rule-based system. Intelligence skill can be decomposed into pieces of simple engineering rules. A rule ontology consists of these engineering rules as primitives and the semantic relations among them. A domain ontology consists of technical terms in the engineering rules and the semantic relations among them. A rule ontology helps novices get the total picture of the intelligence skill, and a domain ontology helps them understand the exact meanings of the engineering rules. A rule-based system helps domain experts externalize their tacit intelligence skill to ontologies and also helps novices internalize them. As a case study, we applied our proposal to an actual job at a remote control and maintenance office of hydroelectric power stations in Tokyo Electric Power Co., Inc. We also performed an evaluation experiment for this case study, and the result supports our proposal.

  19. An investigation of care-based vs. rule-based morality in frontotemporal dementia, Alzheimer's disease, and healthy controls.

    PubMed

    Carr, Andrew R; Paholpak, Pongsatorn; Daianu, Madelaine; Fong, Sylvia S; Mather, Michelle; Jimenez, Elvira E; Thompson, Paul; Mendez, Mario F

    2015-11-01

    Behavioral changes in dementia, especially behavioral variant frontotemporal dementia (bvFTD), may result in alterations in moral reasoning. Investigators have not clarified whether these alterations reflect differential impairment of care-based vs. rule-based moral behavior. This study investigated 18 bvFTD patients, 22 early onset Alzheimer's disease (eAD) patients, and 20 healthy age-matched controls on care-based and rule-based items from the Moral Behavioral Inventory and the Social Norms Questionnaire, neuropsychological measures, and magnetic resonance imaging (MRI) regions of interest. There were significant group differences with the bvFTD patients rating care-based morality transgressions less severely than the eAD group and rule-based moral behavioral transgressions more severely than controls. Across groups, higher care-based morality ratings correlated with phonemic fluency on neuropsychological tests, whereas higher rule-based morality ratings correlated with increased difficulty set-shifting and learning new rules to tasks. On neuroimaging, severe care-based reasoning correlated with cortical volume in right anterior temporal lobe, and rule-based reasoning correlated with decreased cortical volume in the right orbitofrontal cortex. Together, these findings suggest that frontotemporal disease decreases care-based morality and facilitates rule-based morality possibly from disturbed contextual abstraction and set-shifting. Future research can examine whether frontal lobe disorders and bvFTD result in a shift from empathic morality to the strong adherence to conventional rules. Published by Elsevier Ltd.

  20. SOHO-Ulysses Coordinated Studies During the Two Extended Quadratures and the Alignment of 2007-2008

    NASA Technical Reports Server (NTRS)

    Suess, S. T.; Poletto, G.

    2007-01-01

    During SOHO-Sun-Ulysses quadratures the geometry of the configuration makes it possible to sample "in situ" the plasma parcels that are remotely observed in the corona. Although the quadrature position occurs at a well defined instant in time, we typically take data while Ulysses is within +/- 5 degrees of the limb, with the understanding that plasma sampled by Ulysses over this time interval can all be traced to its source in the corona. The relative positions of SOHO and Ulysses in winter 2007 (19 Dec 2006-28 May 2007) are unusual: the SOHO-Sun-Ulysses included angle is always between 85 and 95 degrees - the quadrature lasts for 5 months! This provides an opportunity for extended observations of specific observing objectives. In addition, in summer 2007, Ulysses (at 1.34 AU) is in near-radial alignment with Earth/ACE/Wind and SOHO, allowing us to analyze radial gradients and propagation in the solar wind and inner heliosphere. Our own quadrature campaigns rely heavily on LASCO and UVCS coronal observations: LASCO giving the overall context above 2 solar radii while the UVCS spectrograph acquired data from - 1.5 to, typically, 4-5 solar radii. In the past, coronal parameters have been derived from data acquired by these two experiments and compared with "in situ" data of Ulysses' SWOOPS and SWICS. Data from other experiments like EIT, CDS, SUMER, Sac Peak Fe XIV maps, magnetic field maps from the Wilcox solar magnetograph, MLSO, from MDI, and from the Ulysses magnetograph experiment have been, and will be, used to complement LASCO/UVCS/SWOOPS and SWICS data. We anticipate that observations by ACE/WIND/STEREO/Hinode and other missions will be relevant as well. During the IHY campaigns, Ulysses will be 52-80 degrees south in winter 2007, near sunspot minimum. Hence, our own scientific objective will be to sample high speed wind or regions of transition between slow and fast wind. This might be a very interesting situation - not met in previous quadratures - allowing

  1. An Embedded Rule-Based Diagnostic Expert System in Ada

    NASA Technical Reports Server (NTRS)

    Jones, Robert E.; Liberman, Eugene M.

    1992-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability for computer systems. The integration of expert system technology with the Ada programming language is discussed, especially a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell. NASA Lewis was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The rules were written in the ART-Ada development environment and converted to Ada source code. The graphics interface was developed with the Transportable Application Environment (TAE) Plus, which generates Ada source code to control graphics images. SMART-Ada communicates with a remote host to obtain either simulated or real data. The Ada source code generated with ART-Ada, TAE Plus, and the communications code was incorporated into an Ada expert system that reads the data from a power distribution test bed, applies the rules to determine whether a fault exists, and graphically displays it on the screen. The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  2. Implementing a Commercial Rule Base as a Medication Order Safety Net

    PubMed Central

    Reichley, Richard M.; Seaton, Terry L.; Resetar, Ervina; Micek, Scott T.; Scott, Karen L.; Fraser, Victoria J.; Dunagan, W. Claiborne; Bailey, Thomas C.

    2005-01-01

    A commercial rule base (Cerner Multum) was used to identify medication orders exceeding recommended dosage limits at five hospitals within BJC HealthCare, an integrated health care system. During initial testing, clinical pharmacists determined that there was an excessive number of nuisance and clinically insignificant alerts, with an overall alert rate of 9.2%. A method for customizing the commercial rule base was implemented to increase rule specificity for problematic rules. The system was subsequently deployed at two facilities and achieved alert rates of less than 1%. Pharmacists screened these alerts and contacted ordering physicians in 21% of cases. Physicians made therapeutic changes in response to 38% of alerts presented to them. By applying simple techniques to customize rules, commercial rule bases can be used to rapidly deploy a safety net to screen drug orders for excessive dosages, while preserving the rule architecture for later implementations of more finely tuned clinical decision support. PMID:15802481

  3. Electronically Tunable Differential Integrator: Linear Voltage Controlled Quadrature Oscillator.

    PubMed

    Nandi, Rabindranath; Pattanayak, Sandhya; Venkateswaran, Palaniandavar; Das, Sagarika

    2015-01-01

    A new electronically tunable differential integrator (ETDI) and its extension to a voltage controlled quadrature oscillator (VCQO) design with a linear tuning law are proposed; the active building block is a composite current feedback amplifier with a recent multiplication mode current conveyor (MMCC) element. The utilization of two different kinds of active devices to form a composite building block has recently been considered, since it yields a superior functional element suitable for improved-quality circuit design. The integrator time constant (τ) and the oscillation frequency (ωo) are tunable by the control voltage (V) of the MMCC block. Analysis indicates negligible phase error (θe) for the integrator and low active ωo-sensitivity relative to the device parasitic capacitances. Electronic tunability of some wave-shaping applications of the integrator, and of a double-integrator feedback loop (DIFL) based sinusoid oscillator with a linear fo variation range of 60 kHz~1.8 MHz at a low THD of 2.1%, is verified by both simulation and hardware tests.

  4. RB-ARD: A proof of concept rule-based abort

    NASA Technical Reports Server (NTRS)

    Smith, Richard; Marinuzzi, John

    1987-01-01

    The Abort Region Determinator (ARD) is a console program in the space shuttle mission control center. During shuttle ascent, the Flight Dynamics Officer (FDO) uses the ARD to determine the possible abort modes and make abort calls for the crew. The goal of the Rule-based Abort Region Determinator (RB-ARD) project was to test the concept of providing an onboard ARD for the shuttle or an automated ARD for the mission control center (MCC). A proof-of-concept rule-based system was developed on a LMI Lambda computer using PICON, a knowledge-based system shell. Knowledge derived from documented flight rules and ARD operating procedures was coded in PICON rules. These rules, in conjunction with modules of conventional code, enable the RB-ARD to carry out key parts of the ARD task. Current capabilities of the RB-ARD include: continuous updating of the available abort modes, recognition of a limited number of main engine faults and recommendation of safing actions. Safing actions recommended by the RB-ARD concern the Space Shuttle Main Engine (SSME) limit shutdown system and powerdown of the SSME AC buses.

  5. Knowledge base rule partitioning design for CLIPS

    NASA Technical Reports Server (NTRS)

    Mainardi, Joseph D.; Szatkowski, G. P.

    1990-01-01

    This describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance when using the CLIPS AI shell with large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The main objective of the Expert System advanced development project (ADP-2302) is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impacts during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification and validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the V&V testing necessary by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests of real-time performance for specific machines under real-time operating systems have not been completed but are planned as part of the analysis process to validate the design.

  6. Inference in fuzzy rule bases with conflicting evidence

    NASA Technical Reports Server (NTRS)

    Koczy, Laszlo T.

    1992-01-01

    Inference based on fuzzy 'If ... then' rules has played a very important role since Zadeh proposed the Compositional Rule of Inference and, especially, since the first successful application presented by Mamdani. From the mid-1980s, when the 'fuzzy boom' started in Japan, numerous industrial applications appeared, all using simplified techniques because of the high levels of computational complexity. Another feature is that antecedents in the rules are distributed densely in the input space, so the conclusion can be calculated by some weighted combination of the consequents of the matching (fired) rules. The CRI works in the following way: if R is a rule and A* is an observation, the conclusion is computed by B* = R o A* (o stands for the max-min composition). Algorithms implementing this idea directly have an exponential time complexity (maybe the problem is NP-hard) as the rules are relations in X x Y, a k1 x k2 dimensional space, if X is k1-dimensional and Y is k2-dimensional. The simplified techniques usually decompose the relation into k1 projections in X(sub i) and measure in some way the degree of similarity between observation and antecedent by some parameter of the overlapping. These parameters are aggregated to a single value in (0,1) which is applied as a resulting weight for the given rule. The projections of rules in dimensions Y(sub i) are weighted by these aggregated values and then combined in order to obtain a resulting conclusion separately in every dimension. This method is inapplicable with sparse bases as there is no guarantee that an arbitrary observation matches any of the antecedents. Then the degree of similarity is 0 and all consequents are weighted by 0. Some considerations for such a situation are summarized in the next sections.

  7. Noise-cancelling quadrature magnetic position, speed and direction sensor

    DOEpatents

    Preston, Mark A.; King, Robert D.

    1996-01-01

    An array of three magnetic sensors in a single package is employed with a single bias magnet for sensing shaft position, speed and direction of a motor in a high magnetic noise environment. Two of the three magnetic sensors are situated in an anti-phase relationship (i.e., 180 degrees out of phase) with respect to the relationship between the other of the two sensors and the magnetically salient target, and the third magnetic sensor is situated between the anti-phase sensors. The result is quadrature sensing with noise immunity for accurate relative position, speed and direction measurements.
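
    A minimal sketch of the quadrature decoding principle behind such sensors: two channels 90 degrees out of phase yield speed from the edge rate and direction from which channel leads; the noise-cancelling three-sensor arrangement itself is not modelled and the signals are ideal.

```python
# Minimal sketch of quadrature decoding: channel A and channel B are 90 degrees
# out of phase; direction follows from which channel leads, speed from the
# edge rate.
import numpy as np

t = np.linspace(0, 1, 2000)
f_shaft = 10.0                                   # electrical cycles per second
a = np.sin(2 * np.pi * f_shaft * t) > 0          # channel A (square wave)
b = np.cos(2 * np.pi * f_shaft * t) > 0          # channel B, in quadrature

rising_a = np.flatnonzero(np.diff(a.astype(int)) > 0) + 1
direction = 1 if b[rising_a].mean() > 0.5 else -1    # which channel leads
speed = len(rising_a) / (t[-1] - t[0])               # cycles per second
print(direction, speed)
```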

  8. The May 1997 SOHO-Ulysses Quadrature

    NASA Technical Reports Server (NTRS)

    Suess, Steven T.; Poletto, G.; Romoli, M.; Neugebauer, M.; Goldstein, B. E.; Simnett, G.

    2000-01-01

    We present results from the May 1997 SOHO-Ulysses quadrature, near sunspot minimum. Ulysses was at 5.1 AU, 10 degrees north of the solar equator, and off the east limb. It was, by chance, also at the very northern edge of the streamer belt. Nevertheless, SWOOPS detected only slow, relatively smooth wind and there was no direct evidence of fast wind from the northern polar coronal hole or of mixing with fast wind. LASCO images show that the streamer belt at 10 N was narrow and sharp at the beginning and end of the two-week observation interval, but broadened in the middle. A corresponding change in density, but not flow speed, occurred at Ulysses. Coronal densities derived from UVCS show that physical parameters in the lower corona are closely related to those in the solar wind, both over quiet intervals and in transient events on the limb. One small transient observed by both LASCO and UVCS is analyzed in detail.

  9. Parallel-quadrature phase-shifting digital holographic microscopy using polarization beam splitter

    PubMed Central

    Das, Bhargab; Yelleswarapu, Chandra S; Rao, DVGLN

    2012-01-01

    We present a digital holographic microscopy technique based on a parallel-quadrature phase-shifting method. Two π/2 phase-shifted holograms are recorded simultaneously using the polarization phase-shifting principle, a slightly off-axis recording geometry, and two identical CCD sensors. The parallel phase-shifting is realized by combining a circularly polarized object beam with a 45° polarized reference beam through a polarizing beam splitter. The DC term is eliminated by subtracting the two holograms from each other, and the object information is reconstructed after selecting the frequency spectrum of the real image. Both amplitude and phase object reconstruction results are presented. Simultaneous recording eliminates phase errors caused by mechanical vibrations and air turbulence. The slightly off-axis recording geometry with phase-shifting allows a much larger dimension of the spatial filter for reconstruction of the object information. This leads to better reconstruction capability than traditional off-axis holography. PMID:23109732
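
    A minimal sketch of the two-step (π/2) quadrature phase-recovery principle underlying such techniques, on synthetic data; the slightly off-axis spectral filtering and the polarization optics of the actual setup are not reproduced, and the DC removal here is a crude mean estimate rather than the hologram subtraction of the paper.

```python
# Minimal sketch: with the DC level removed, two pi/2-shifted interferograms
# act as cosine and sine of the object phase, so arctan2 recovers the
# (wrapped) phase.
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
phi = 6.0 * np.exp(-(x**2 + y**2) / 0.3)      # synthetic object phase (radians)

I0, V = 1.0, 0.8                               # background level and fringe visibility
H1 = I0 + V * np.cos(phi)                      # hologram 1
H2 = I0 + V * np.sin(phi)                      # hologram 2, shifted by pi/2

dc = 0.5 * (H1.mean() + H2.mean())             # crude DC estimate
phi_rec = np.arctan2(H2 - dc, H1 - dc)         # wrapped phase estimate

# Wrapped residual between recovered and true phase
print(np.abs(np.angle(np.exp(1j * (phi_rec - phi)))).max())
```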

  10. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    NASA Astrophysics Data System (ADS)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that obtained by the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.

  11. Free vibration analysis of a robotic fish based on a continuous and non-uniform flexible backbone with distributed masses

    NASA Astrophysics Data System (ADS)

    Coral, W.; Rossi, C.; Curet, O. M.

    2015-12-01

    This paper presents a Differential Quadrature Element Method for free transverse vibration of a robotic fish based on a continuous and non-uniform flexible backbone with distributed masses (fish ribs). The proposed method is based on the theory of a Timoshenko cantilever beam. The effects of the masses (number, magnitude and position) on the values of the natural frequencies are investigated. Governing equations, compatibility and boundary conditions are formulated according to the Differential Quadrature rules. The convergence, efficiency and accuracy are compared with other analytical solutions proposed in the literature. Moreover, the proposed method has been validated against a physical prototype of a flexible fish backbone. The main advantages of this method, compared to the exact solutions available in the literature, are twofold: first, a smaller computational cost and, second, it allows analysing free vibration in beams whose cross-section is an arbitrary function, which is normally difficult or even impossible with other analytical methods.
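
    A minimal sketch of the differential quadrature rule that such element methods build on: the first-derivative weighting matrix on a non-uniform grid, checked against an analytic derivative; the Timoshenko beam mechanics itself is not included.

```python
# Minimal sketch of the differential quadrature (DQ) rule: build the
# first-derivative weighting matrix on a Chebyshev-Gauss-Lobatto grid and
# check it against the exact derivative of sin(x).
import numpy as np

def dq_first_derivative_matrix(x):
    """Weighting coefficients a[i, j] such that f'(x_i) ~ sum_j a[i, j] f(x_j)."""
    n = len(x)
    M = np.array([np.prod(x[i] - np.delete(x, i)) for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        A[i, i] = -A[i].sum()          # row sum of off-diagonal terms
    return A

n = 15
x = 0.5 * np.pi * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))  # CGL points on [0, pi]
A = dq_first_derivative_matrix(x)
print(np.max(np.abs(A @ np.sin(x) - np.cos(x))))   # close to machine precision
```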

  12. An Investigation of Care-Based vs. Rule-Based Morality in Frontotemporal Dementia, Alzheimer’s Disease, and Healthy Controls

    PubMed Central

    Carr, Andrew R.; Paholpak, Pongsatorn; Daianu, Madelaine; Fong, Sylvia S.; Mather, Michelle; Jimenez, Elvira E.; Thompson, Paul; Mendez, Mario F.

    2015-01-01

    Behavioral changes in dementia, especially behavioral variant frontotemporal dementia (bvFTD), may result in alterations in moral reasoning. Investigators have not clarified whether these alterations reflect differential impairment of care-based vs. rule-based moral behavior. This study investigated 18 bvFTD patients, 22 early onset Alzheimer’s disease (eAD) patients, and 20 healthy age-matched controls on care-based and rule-based items from the Moral Behavioral Inventory and the Social Norms Questionnaire, neuropsychological measures, and magnetic resonance imaging (MRI) regions of interest. There were significant group differences with the bvFTD patients rating care-based morality transgressions less severely than the eAD group and rule-based moral behavioral transgressions more severely than controls. Across groups, higher care-based morality ratings correlated with phonemic fluency on neuropsychological tests, whereas higher rule-based morality ratings correlated with increased difficulty set-shifting and learning new rules to tasks. On neuroimaging, severe care-based reasoning correlated with cortical volume in right anterior temporal lobe, and rule-based reasoning correlated with decreased cortical volume in the right orbitofrontal cortex. Together, these findings suggest that frontotemporal disease decreases care-based morality and facilitates rule-based morality possibly from disturbed contextual abstraction and set-shifting. Future research can examine whether frontal lobe disorders and bvFTD result in a shift from empathic morality to the strong adherence to conventional rules. PMID:26432341

  13. Efficiency in Rule- vs. Plan-Based Movements Is Modulated by Action-Mode

    PubMed Central

    Scheib, Jean P. P.; Stoll, Sarah; Thürmer, J. Lukas; Randerath, Jennifer

    2018-01-01

    The rule/plan motor cognition (RPMC) paradigm elicits visually indistinguishable motor outputs, resulting from either plan- or rule-based action-selection, using a combination of essentially interchangeable stimuli. Previous implementations of the RPMC paradigm have used pantomimed movements to compare plan- vs. rule-based action-selection. In the present work we attempt to determine the generalizability of previous RPMC findings to real object interaction by use of a grasp-to-rotate task. In the plan task, participants had to use prospective planning to achieve a comfortable post-handle rotation hand posture. The rule task used implementation intentions (if-then rules) leading to the same comfortable end-state. In Experiment A, we compare RPMC performance of 16 healthy participants in pantomime and real object conditions of the experiment, within-subjects. Higher processing efficiency of rule- vs. plan-based action-selection was supported by diffusion model analysis. Results show a significant response-time increase in the pantomime condition compared to the real object condition and a greater response-time advantage of rule-based vs. plan-based actions in the pantomime compared to the real object condition. In Experiment B, 24 healthy participants performed the real object RPMC task in a task switching vs. a blocked condition. Results indicate that plan-based action-selection leads to longer response-times and less efficient information processing than rule-based action-selection in line with previous RPMC findings derived from the pantomime action-mode. Particularly in the task switching mode, responses were faster in the rule compared to the plan task suggesting a modulating influence of cognitive load. Overall, results suggest an advantage of rule-based action-selection over plan-based action-selection; whereby differential mechanisms appear to be involved depending on the action-mode. We propose that cognitive load is a factor that modulates the advantageous

  14. Efficiency in Rule- vs. Plan-Based Movements Is Modulated by Action-Mode.

    PubMed

    Scheib, Jean P P; Stoll, Sarah; Thürmer, J Lukas; Randerath, Jennifer

    2018-01-01

    The rule/plan motor cognition (RPMC) paradigm elicits visually indistinguishable motor outputs, resulting from either plan- or rule-based action-selection, using a combination of essentially interchangeable stimuli. Previous implementations of the RPMC paradigm have used pantomimed movements to compare plan- vs. rule-based action-selection. In the present work we attempt to determine the generalizability of previous RPMC findings to real object interaction by use of a grasp-to-rotate task. In the plan task, participants had to use prospective planning to achieve a comfortable post-handle rotation hand posture. The rule task used implementation intentions (if-then rules) leading to the same comfortable end-state. In Experiment A, we compare RPMC performance of 16 healthy participants in pantomime and real object conditions of the experiment, within-subjects. Higher processing efficiency of rule- vs. plan-based action-selection was supported by diffusion model analysis. Results show a significant response-time increase in the pantomime condition compared to the real object condition and a greater response-time advantage of rule-based vs. plan-based actions in the pantomime compared to the real object condition. In Experiment B, 24 healthy participants performed the real object RPMC task in a task switching vs. a blocked condition. Results indicate that plan-based action-selection leads to longer response-times and less efficient information processing than rule-based action-selection in line with previous RPMC findings derived from the pantomime action-mode. Particularly in the task switching mode, responses were faster in the rule compared to the plan task suggesting a modulating influence of cognitive load. Overall, results suggest an advantage of rule-based action-selection over plan-based action-selection; whereby differential mechanisms appear to be involved depending on the action-mode. We propose that cognitive load is a factor that modulates the advantageous

  15. New developments of the Extended Quadrature Method of Moments to solve Population Balance Equations

    NASA Astrophysics Data System (ADS)

    Pigou, Maxime; Morchain, Jérôme; Fede, Pascal; Penet, Marie-Isabelle; Laronze, Geoffrey

    2018-07-01

    Population Balance Models have a wide range of applications in many industrial fields as they allow accounting for heterogeneity among properties which are crucial for some system modelling. They describe the evolution of a Number Density Function (NDF) using a Population Balance Equation (PBE). For instance, they are applied to gas-liquid columns or stirred reactors, aerosol technology, crystallisation processes, fine particles or biological systems. There is significant interest in fast, stable and accurate numerical methods for solving PBEs; one class of such methods does not solve directly for the NDF but resolves its moments instead. These methods of moments, and in particular quadrature-based methods of moments, have been successfully applied to a variety of systems. Point-wise values of the NDF are sometimes required but are not directly accessible from the moments. To address these issues, the Extended Quadrature Method of Moments (EQMOM) has been developed in the past few years; it approximates the NDF, from its moments, as a convex mixture of Kernel Density Functions (KDFs) of the same parametric family. In the present work EQMOM is further developed on two aspects. The main one is a significant improvement of the core iterative procedure of that method; the corresponding reduction of its computational cost is estimated to range from 60% up to 95%. The second aspect is an extension of EQMOM to two new KDFs used for the approximation, the Weibull and the Laplace kernels. All MATLAB source codes used for this article are provided with this article.
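
    A minimal sketch of the moment-inversion step shared by quadrature-based methods of moments: recovering quadrature nodes and weights from the low-order moments of an NDF (here a lognormal, chosen because its moments are known in closed form) via a Golub-Welsch-type construction; the EQMOM kernel reconstruction and the improved iterative procedure of the paper are not reproduced.

```python
# Minimal sketch: invert moments m_0..m_2n of a distribution into an n-node
# Gaussian quadrature via Cholesky of the Hankel moment matrix followed by a
# Jacobi-matrix eigenvalue problem (Golub-Welsch-type construction).
import numpy as np

def gauss_from_moments(m):
    """Nodes/weights of an n-node quadrature from moments m_0..m_2n."""
    n = (len(m) - 1) // 2
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                      # H = R^T R, R upper triangular
    alpha = [R[j, j + 1] / R[j, j] - (R[j - 1, j] / R[j - 1, j - 1] if j else 0.0)
             for j in range(n)]
    beta = [R[j, j] / R[j - 1, j - 1] for j in range(1, n)]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0, :] ** 2
    return nodes, weights

# Lognormal NDF: E[X^k] = exp(k*mu + k^2*sig^2/2)
mu, sig, n = 0.0, 0.4, 3
k = np.arange(2 * n + 1)
moments = np.exp(k * mu + (k * sig) ** 2 / 2)
x, w = gauss_from_moments(moments)

# The 3-node quadrature reproduces moments 0..5 of the NDF
recon = np.array([np.sum(w * x**j) for j in range(2 * n)])
print(np.max(np.abs(recon - moments[:2 * n])))
```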

  16. Rule-Based and Information-Integration Category Learning in Normal Aging

    ERIC Educational Resources Information Center

    Maddox, W. Todd; Pacheco, Jennifer; Reeves, Maia; Zhu, Bo; Schnyer, David M.

    2010-01-01

    The basal ganglia and prefrontal cortex play critical roles in category learning. Both regions evidence age-related structural and functional declines. The current study examined rule-based and information-integration category learning in a group of older and younger adults. Rule-based learning is thought to involve explicit, frontally mediated…

  17. Raman-noise-induced quantum limits for χ(3) nondegenerate phase-sensitive amplification and quadrature squeezing

    NASA Astrophysics Data System (ADS)

    Voss, Paul L.; Köprülü, Kahraman G.; Kumar, Prem

    2006-04-01

    We present a quantum theory of nondegenerate phase-sensitive parametric amplification in a χ(3) nonlinear medium. The nonzero response time of the Kerr (χ(3)) nonlinearity determines the quantum-limited noise figure of χ(3) parametric amplification, as well as the limit on quadrature squeezing. This nonzero response time of the nonlinearity requires coupling of the parametric process to a molecular vibration phonon bath, causing the addition of excess noise through spontaneous Raman scattering. We present analytical expressions for the quantum-limited noise figure of frequency nondegenerate and frequency degenerate χ(3) parametric amplifiers operated as phase-sensitive amplifiers. We also present results for frequency nondegenerate quadrature squeezing. We show that our nondegenerate squeezing theory agrees with the degenerate squeezing theory of Boivin and Shapiro as degeneracy is approached. We have also included the effect of linear loss on the phase-sensitive process.

  18. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with the drive-mode position.

  19. Evaluation of a rule base for decision making in general practice.

    PubMed Central

    Essex, B; Healy, M

    1994-01-01

    BACKGROUND. Decision making in general practice relies heavily on judgmental expertise. It should be possible to codify this expertise into rules and principles. AIM. A study was undertaken to evaluate the effectiveness of rules from a rule base designed to improve students' and trainees' management decisions relating to patients seen in general practice. METHOD. The rule base was developed after studying decisions about and management of thousands of patients seen in one general practice over an eight year period. Vignettes were presented to 93 fourth year medical students and 179 general practitioner trainees. They recorded their perception and management of each case before and after being presented with a selection of relevant rules. Participants also commented on their level of agreement with each of the rules provided with the vignettes. A panel of five independent assessors then rated as good, acceptable or poor the participants' perception and management of each case before and after seeing the rules. RESULTS. Exposure to a few selected rules of thumb improved the problem perception and management decisions of both undergraduates and trainees. The degree of improvement was not related to previous experience or to the stated level of agreement with the proposed rules. The assessors identified difficulties students and trainees experienced in changing their perceptions and management decisions when the rules suggested options they had not considered. CONCLUSION. The rules developed to improve decision making skills in general practice are effective when used with vignettes. The next phase is to transform the rule base into an expert system to train students and doctors to acquire decision making skills. It could also be used to provide decision support when confronted with difficult management decisions in general practice. PMID:8204334

  20. A rule-based automatic sleep staging method.

    PubMed

    Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian

    2012-03-30

    In this paper, a rule-based automatic sleep staging method was proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, were utilized. Normalization was applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules was constructed for sleep stage classification. Finally, a smoothing process considering the temporal contextual information was applied to ensure continuity. The overall agreement and kappa coefficient of the proposed method, applied to the all-night polysomnography (PSG) recordings of seventeen healthy subjects and compared with manual scoring by the R&K rules, can reach 86.68% and 0.79, respectively. This method can be integrated with portable PSG systems for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.
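
    A minimal sketch in Python of how such a hierarchical, rule-based stage decision over normalized features and a temporal smoothing pass might be organized; the feature names and thresholds are illustrative assumptions, not the paper's fourteen published rules.

      # Illustrative hierarchical rules over normalized features; names and
      # thresholds are hypothetical, not the published rule set.
      def classify_epoch(f):
          """f: dict of normalized features for one 30-s epoch."""
          if f["emg_power"] > 0.7 and f["alpha_ratio"] > 0.5:
              return "Wake"
          if f["eog_movement"] > 0.6 and f["emg_power"] < 0.2:
              return "REM"
          if f["delta_ratio"] > 0.5:
              return "N3"                      # slow-wave sleep
          if f["spindle_density"] > 0.3:
              return "N2"
          return "N1"

      def smooth(stages, k=2):
          """Temporal-context smoothing: majority vote in a +/-k epoch window."""
          out = []
          for i in range(len(stages)):
              window = stages[max(0, i - k):i + k + 1]
              out.append(max(set(window), key=window.count))
          return out

      epoch = {"emg_power": 0.1, "alpha_ratio": 0.2, "eog_movement": 0.7,
               "delta_ratio": 0.1, "spindle_density": 0.1}
      print(classify_epoch(epoch))             # -> "REM" under these thresholds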

  1. Timescale analysis of rule-based biochemical reaction networks

    PubMed Central

    Klinke, David J.; Finley, Stacey D.

    2012-01-01

    The flow of information within a cell is governed by a series of protein-protein interactions that can be described as a reaction network. Mathematical models of biochemical reaction networks can be constructed by repetitively applying specific rules that define how reactants interact and what new species are formed upon reaction. To aid in understanding the underlying biochemistry, timescale analysis is one method developed to prune the size of the reaction network. In this work, we extend the methods associated with timescale analysis to reaction rules instead of the species contained within the network. To illustrate this approach, we applied timescale analysis to a simple receptor-ligand binding model and a rule-based model of Interleukin-12 (IL-12) signaling in naïve CD4+ T cells. The IL-12 signaling pathway includes multiple protein-protein interactions that collectively transmit information; however, the level of mechanistic detail sufficient to capture the observed dynamics has not been justified based upon the available data. The analysis correctly predicted that reactions associated with JAK2 and TYK2 binding to their corresponding receptor exist at a pseudo-equilibrium. In contrast, reactions associated with ligand binding and receptor turnover regulate cellular response to IL-12. An empirical Bayesian approach was used to estimate the uncertainty in the timescales. This approach complements existing rank- and flux-based methods that can be used to interrogate complex reaction networks. Ultimately, timescale analysis of rule-based models is a computational tool that can be used to reveal the biochemical steps that regulate signaling dynamics. PMID:21954150

  2. Detection of Interference Phase by Digital Computation of Quadrature Signals in Homodyne Laser Interferometry

    PubMed Central

    Rerucha, Simon; Buchta, Zdenek; Sarbort, Martin; Lazar, Josef; Cip, Ondrej

    2012-01-01

    We have proposed an approach to interference phase extraction in homodyne laser interferometry. The method employs a series of computational steps to reconstruct the signals for quadrature detection from an interference signal from a non-polarising interferometer sampled by a simple photodetector. The complexity trade-off is the use of a laser beam with frequency modulation capability. The method is analytically derived, and its validity and performance are experimentally verified. It has proven to be a feasible alternative to traditional homodyne detection, since it performs with comparable accuracy, especially where the complexity of the optical setup is a principal issue and the modulation of the laser beam is not a heavy burden (e.g., in multi-axis sensors or laser diode based systems). PMID:23202038

  3. Detection of interference phase by digital computation of quadrature signals in homodyne laser interferometry.

    PubMed

    Rerucha, Simon; Buchta, Zdenek; Sarbort, Martin; Lazar, Josef; Cip, Ondrej

    2012-10-19

    We have proposed an approach to interference phase extraction in homodyne laser interferometry. The method employs a series of computational steps to reconstruct the signals for quadrature detection from an interference signal from a non-polarising interferometer sampled by a simple photodetector. The complexity trade-off is the use of a laser beam with frequency modulation capability. The method is analytically derived, and its validity and performance are experimentally verified. It has proven to be a feasible alternative to traditional homodyne detection, since it performs with comparable accuracy, especially where the complexity of the optical setup is a principal issue and the modulation of the laser beam is not a heavy burden (e.g., in multi-axis sensors or laser diode based systems).

  4. Simulating Rule-Based Systems

    DTIC Science & Technology

    1988-12-01

    [OCR fragment of the report's variable glossary and Fortran listing. Recoverable definitions: the number of facts (variable name cut off in the source); NFIRE, location of the rule status flag; NLVL, the number of levels; NRULE, the number of rules; NRUN, the number of runs. The remaining code fragments initialize the rule matrix and store the random-assertion set in a matrix.]

  5. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)

    2000-01-01

    This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
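
    The key numerical ingredient described above is an adaptive quadrature procedure applied to the flux-divergence integral on element interiors. As a generic illustration of the refine-until-tolerance idea only (a one-dimensional adaptive Simpson rule in Python, not the simplex quadrature actually used by the scheme):

      import math

      # Recursive adaptive Simpson quadrature: subdivide an interval until the
      # local error estimate meets the tolerance.
      def simpson(f, a, b):
          c = 0.5 * (a + b)
          return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

      def adaptive_simpson(f, a, b, tol=1e-10):
          c = 0.5 * (a + b)
          whole = simpson(f, a, b)
          left, right = simpson(f, a, c), simpson(f, c, b)
          if abs(left + right - whole) < 15.0 * tol:
              return left + right + (left + right - whole) / 15.0
          return (adaptive_simpson(f, a, c, tol / 2.0) +
                  adaptive_simpson(f, c, b, tol / 2.0))

      # Example: integrate sin(x) on [0, pi]; the exact value is 2.
      print(adaptive_simpson(math.sin, 0.0, math.pi))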

  6. Rule Based Category Learning in Patients with Parkinson’s Disease

    PubMed Central

    Price, Amanda; Filoteo, J. Vincent; Maddox, W. Todd

    2009-01-01

    Measures of explicit rule-based category learning are commonly used in neuropsychological evaluation of individuals with Parkinson’s disease (PD) and the pattern of PD performance on these measures tends to be highly varied. We review the neuropsychological literature to clarify the manner in which PD affects the component processes of rule-based category learning and work to identify and resolve discrepancies within this literature. In particular, we address the manner in which PD and its common treatments affect the processes of rule generation, maintenance, shifting and selection. We then integrate the neuropsychological research with relevant neuroimaging and computational modeling evidence to clarify the neurobiological impact of PD on each process. Current evidence indicates that neurochemical changes associated with PD primarily disrupt rule shifting, and may disturb feedback-mediated learning processes that guide rule selection. Although surgical and pharmacological therapies remediate this deficit, it appears that the same treatments may contribute to impaired rule generation, maintenance and selection processes. These data emphasize the importance of distinguishing between the impact of PD and its common treatments when considering the neuropsychological profile of the disease. PMID:19428385

  7. A hybrid learning method for constructing compact rule-based fuzzy models.

    PubMed

    Zhao, Wanqing; Niu, Qun; Li, Kang; Irwin, George W

    2013-12-01

    The Takagi–Sugeno–Kang-type rule-based fuzzy model has found many applications in different fields; a major challenge is, however, to build a compact model with optimized model parameters which leads to satisfactory model performance. To produce a compact model, most existing approaches mainly focus on selecting an appropriate number of fuzzy rules. In contrast, this paper considers not only the selection of fuzzy rules but also the structure of each rule premise and consequent, leading to the development of a novel compact rule-based fuzzy model. Here, each fuzzy rule is associated with two sets of input attributes, in which the first is used for constructing the rule premise and the other is employed in the rule consequent. A new hybrid learning method combining the modified harmony search method with a fast recursive algorithm is hereby proposed to determine the structure and the parameters for the rule premises and consequents. This is a hard mixed-integer nonlinear optimization problem, and the proposed hybrid method solves the problem by employing an embedded framework, leading to a significantly reduced number of model parameters and a small number of fuzzy rules with each being as simple as possible. Results from three examples are presented to demonstrate the compactness (in terms of the number of model parameters and the number of rules) and the performance of the fuzzy models obtained by the proposed hybrid learning method, in comparison with other techniques from the literature.

  8. Building a common pipeline for rule-based document classification.

    PubMed

    Patterson, Olga V; Ginter, Thomas; DuVall, Scott L

    2013-01-01

    Instance-based classification of clinical text is a widely used natural language processing task employed as a step for patient classification, document retrieval, or information extraction. Rule-based approaches rely on concept identification and context analysis in order to determine the appropriate class. We propose a five-step process that enables even small research teams to develop simple but powerful rule-based NLP systems by taking advantage of a common UIMA AS based pipeline for classification. Our proposed methodology coupled with the general-purpose solution provides researchers with access to the data locked in clinical text in cases of limited human resources and compact timelines.

  9. Opinion evolution based on cellular automata rules in small world networks

    NASA Astrophysics Data System (ADS)

    Shi, Xiao-Ming; Shi, Lun; Zhang, Jie-Fang

    2010-03-01

    In this paper, we apply cellular automata rules, which can be given by a truth table, to human memory. We design each memory as a tracking survey mode that keeps the three most recent opinions. Each cellular automaton rule, acting as a personal mechanism, gives the final ruling in one time period based on the data stored in one's memory. The key focus of the paper is the evolution of people's attitudes to the same question. Based on a large number of empirical observations from computer simulations, all the rules can be classified into 20 groups. We highlight the fact that the phenomenon shown by some rules belonging to the same group will be altered within several steps by other rules in different groups. Remarkably, when compared with the record of past presidential voting in America, the eras of important events in America's history coincide with the simulation results obtained by our model.
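
    A minimal sketch in Python of the mechanism described above, assuming binary opinions and synchronous updates (conventions not spelled out in the abstract): each agent stores its three most recent opinions, and a cellular-automaton rule, read as an 8-entry truth table, maps that memory to the agent's next opinion.

      import random

      # rule_number in 0..255 encodes the truth table over a 3-bit memory.
      def apply_rule(rule_number, memory):
          index = memory[0] * 4 + memory[1] * 2 + memory[2]
          return (rule_number >> index) & 1

      def evolve(n_agents=100, rule_number=110, steps=50):
          memories = [[random.randint(0, 1) for _ in range(3)]
                      for _ in range(n_agents)]
          history = []
          for _ in range(steps):
              opinions = [apply_rule(rule_number, m) for m in memories]
              for m, o in zip(memories, opinions):
                  m.pop(0)                 # forget the oldest opinion
                  m.append(o)              # remember the newest one
              history.append(sum(opinions) / n_agents)   # fraction holding "1"
          return history

      print(evolve()[-5:])                 # late-time opinion fractions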

  10. Hierarchical graphs for rule-based modeling of biochemical systems

    PubMed Central

    2011-01-01

    Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for specifying rule-based models

  11. Digitally generated excitation and near-baseband quadrature detection of rapid scan EPR signals.

    PubMed

    Tseitlin, Mark; Yu, Zhelin; Quine, Richard W; Rinard, George A; Eaton, Sandra S; Eaton, Gareth R

    2014-12-01

    The use of multiple synchronized outputs from an arbitrary waveform generator (AWG) provides the opportunity to perform EPR experiments differently than by conventional EPR. We report a method for reconstructing the quadrature EPR spectrum from periodic signals that are generated with sinusoidal magnetic field modulation such as continuous wave (CW), multiharmonic, or rapid scan experiments. The signal is down-converted to an intermediate frequency (IF) that is less than the field scan or field modulation frequency and then digitized in a single channel. This method permits use of a high-pass analog filter before digitization to remove the strong non-EPR signal at the IF that might otherwise overwhelm the digitizer. The IF is the difference between two synchronized X-band outputs from a Tektronix AWG 70002A, one of which is for excitation and the other is the reference for down-conversion. To permit signal averaging, timing was selected to give an exact integer number of full cycles for each frequency. In the experiments reported here the IF was 5 kHz and the scan frequency was 40 kHz. To produce sinusoidal rapid scans with a scan frequency eight times the IF, a third synchronized output generated a square wave that was converted to a sine wave. The timing of the data acquisition with a Bruker SpecJet II was synchronized by an external clock signal from the AWG. The baseband quadrature signal in the frequency domain was reconstructed. This approach has the advantages that (i) the non-EPR response at the carrier frequency is eliminated, (ii) both real and imaginary EPR signals are reconstructed from a single physical channel to produce an ideal quadrature signal, and (iii) signal bandwidth does not increase relative to baseband detection. Spectra were obtained by deconvolution of the reconstructed signals for solid BDPA (1,3-bisdiphenylene-2-phenylallyl) in air, 0.2 mM trityl OX63 in water, 15N-perdeuterated tempone, and a nitroxide with a 0.5 G partially-resolved proton
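
    A minimal sketch in Python/NumPy of the generic digital down-conversion step implied above, in which a single digitized channel at the intermediate frequency (IF) is mixed with quadrature references and low-pass filtered to recover the in-phase and quadrature components; the sample rate, IF value, test signal and filter are illustrative assumptions, not the instrument settings.

      import numpy as np

      fs, f_if = 1.0e6, 5.0e3                  # sample rate and IF (assumed)
      t = np.arange(0, 0.02, 1.0 / fs)
      record = np.cos(2 * np.pi * f_if * t + 0.3)   # stand-in for digitized data

      def lowpass(x, n=2000):                  # crude moving-average filter
          return np.convolve(x, np.ones(n) / n, mode="same")

      # Mix with cos/sin references at the IF, then filter out the 2*IF terms.
      i_bb = 2 * lowpass(record * np.cos(2 * np.pi * f_if * t))
      q_bb = 2 * lowpass(-record * np.sin(2 * np.pi * f_if * t))
      phase = np.unwrap(np.arctan2(q_bb, i_bb))     # recovered signal phase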

  12. A Rule Based Approach to ISS Interior Volume Control and Layout

    NASA Technical Reports Server (NTRS)

    Peacock, Brian; Maida, Jim; Fitts, David; Dory, Jonathan

    2001-01-01

    Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements given that the human performance result is satisfactory. Clearly such approaches may work but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.

  13. Evolving rule-based systems in two medical domains using genetic programming.

    PubMed

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan; Axer, Hubertus; Bjerregaard, Beth; von Keyserlingk, Diedrich Graf

    2004-11-01

    To demonstrate and compare the application of different genetic programming (GP) based intelligent methodologies for the construction of rule-based systems in two medical domains: the diagnosis of aphasia's subtypes and the classification of pap-smear examinations. Past data represented (a) successful diagnoses of aphasia's subtypes obtained from collaborating medical experts through a free interview per patient, and (b) smears (images of cells) correctly classified by cyto-technologists, previously stained using the Papanicolaou method. Initially, a hybrid approach is proposed, which combines standard genetic programming and heuristic hierarchical crisp rule-base construction. Then, genetic programming for the production of crisp rule-based systems is attempted. Finally, another hybrid intelligent model is constructed using a grammar-driven genetic programming system for the generation of fuzzy rule-based systems. Results denote the effectiveness of the proposed systems, which are also compared, in terms of efficiency, accuracy and comprehensibility, to an inductive machine learning approach as well as to a standard genetic programming symbolic expression approach. The proposed GP-based intelligent methodologies are able to produce accurate and comprehensible results for medical experts, performing competitively with other intelligent approaches. The aim of the authors was the production of accurate but also sensible decision rules that could potentially help medical doctors to extract conclusions, even at the expense of achieving a higher classification score.

  14. Two integrator loop quadrature oscillators: A review.

    PubMed

    Soliman, Ahmed M

    2013-01-01

    A review of two-integrator-loop oscillator circuits providing two quadrature sinusoidal output voltages is given. All the circuits considered employ the minimum number of capacitors, namely two, except for one circuit, which uses three capacitors. The circuits considered are classified into four different classes. The first class includes floating capacitors and floating resistors, and the active building blocks realizing these circuits are the Op Amp or the OTRA. The second class employs grounded capacitors and includes floating resistors, and the active building blocks realizing these circuits are the DCVC, the unity-gain cells or the CFOA. The third class employs grounded capacitors and grounded resistors, and the active building blocks realizing these circuits are the CCII. The fourth class employs grounded capacitors and no resistors, and the active building blocks realizing these circuits are the TA. Transformation methods showing how the different classes can be generated from each other are given in detail; this is one of the main objectives of this paper.

  15. A method of extracting impervious surface based on rule algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Shuangyun; Hong, Liang; Xu, Quanli

    2018-02-01

    The impervious surface has become an important index for evaluating urban environmental quality and measuring the level of urbanization. At present, remote sensing technology has become the main way to extract the impervious surface. In this paper, a method to extract the impervious surface based on a rule algorithm is proposed. The main idea of the method is to use a rule-based algorithm to extract the impervious surface based on the spectral characteristics of, and the differences between, the impervious surface and the other three types of objects (water, soil and vegetation) in the seven original bands, NDWI and NDVI. The procedure consists of three steps: 1) first, vegetation is extracted according to the principle that vegetation is higher in the near-infrared band than in the other bands; 2) then, water is extracted according to the characteristic that water has the highest NDWI and the lowest NDVI; 3) finally, the impervious surface is extracted based on the fact that it has a higher NDWI value and a lower NDVI value than the soil. In order to test the accuracy of the rule algorithm, this paper applies the linear spectral mixture decomposition algorithm, the CART algorithm and the NDII index algorithm to extract the impervious surface from six remote sensing images of the Dianchi Lake Basin from 1999 to 2014. The accuracy of these three methods is then compared with that of the rule algorithm using the overall classification accuracy. It is found that the extraction method based on the rule algorithm achieves obviously higher accuracy than the other three methods.
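
    A minimal sketch in Python/NumPy of the three-step rule sequence described above (vegetation first, then water, then impervious surface, with the remainder labeled as soil); the band names and thresholds are illustrative assumptions, not the paper's calibrated values.

      import numpy as np

      def classify(nir, red, green, blue):
          ndvi = (nir - red) / (nir + red + 1e-9)
          ndwi = (green - nir) / (green + nir + 1e-9)
          label = np.full(nir.shape, "soil", dtype=object)
          # Rule 1: vegetation -- NIR reflectance dominates the visible bands.
          veg = (nir > red) & (nir > green) & (nir > blue) & (ndvi > 0.3)
          # Rule 2: water -- highest NDWI together with the lowest NDVI.
          water = ~veg & (ndwi > 0.0) & (ndvi < 0.0)
          # Rule 3: impervious surface -- higher NDWI and lower NDVI than soil.
          imperv = ~veg & ~water & (ndwi > -0.1) & (ndvi < 0.2)
          label[veg], label[water], label[imperv] = "vegetation", "water", "impervious"
          return label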

  16. Identification of rheumatoid arthritis and osteoarthritis patients by transcriptome-based rule set generation

    PubMed Central

    2014-01-01

    Introduction Discrimination of rheumatoid arthritis (RA) patients from patients with other inflammatory or degenerative joint diseases or healthy individuals purely on the basis of genes differentially expressed in high-throughput data has proven very difficult. Thus, the present study sought to achieve such discrimination by employing a novel unbiased approach using rule-based classifiers. Methods Three multi-center genome-wide transcriptomic data sets (Affymetrix HG-U133 A/B) from a total of 79 individuals, including 20 healthy controls (control group - CG), as well as 26 osteoarthritis (OA) and 33 RA patients, were used to infer rule-based classifiers to discriminate the disease groups. The rules were ranked with respect to Kiendl’s statistical relevance index, and the resulting rule set was optimized by pruning. The rule sets were inferred separately from data of one of three centers and applied to the two remaining centers for validation. All rules from the optimized rule sets of all centers were used to analyze their biological relevance applying the software Pathway Studio. Results The optimized rule sets for the three centers contained a total of 29, 20, and 8 rules (including 10, 8, and 4 rules for ‘RA’), respectively. The mean sensitivity for the prediction of RA based on six center-to-center tests was 96% (range 90% to 100%), that for OA 86% (range 40% to 100%). The mean specificity for RA prediction was 94% (range 80% to 100%), that for OA 96% (range 83.3% to 100%). The average overall accuracy of the three different rule-based classifiers was 91% (range 80% to 100%). Unbiased analyses by Pathway Studio of the gene sets obtained by discrimination of RA from OA and CG with rule-based classifiers resulted in the identification of the pathogenetically and/or therapeutically relevant interferon-gamma and GM-CSF pathways. Conclusion First-time application of rule-based classifiers for the discrimination of RA resulted in high performance, with means

  17. Induced polarization of volcanic rocks - 1. Surface versus quadrature conductivity

    NASA Astrophysics Data System (ADS)

    Revil, A.; Le Breton, M.; Niu, Q.; Wallin, E.; Haskins, E.; Thomas, D. M.

    2017-02-01

    We performed complex conductivity measurements on 28 core samples from the hole drilled for the Humu'ula Groundwater Research Project (Hawai'i Island, HI, USA). The complex conductivity measurements were performed at 4 different pore water conductivities (0.07, 0.5, 1.0 or 2.0, and 10 S m-1, prepared with NaCl) over the frequency range 1 mHz to 45 kHz at 22 ± 1 °C. The in-phase conductivity data are plotted against the pore water conductivity to determine, sample by sample, the intrinsic formation factor and the surface conductivity. The intrinsic formation factor is related to porosity by Archie's law with an average value of the cementation exponent m of 2.45, indicating that only a small fraction of the connected pore space controls the transport properties. Both the surface and quadrature conductivities are found to be linearly related to the cation exchange capacity of the material, which was measured with the cobalt hexamine chloride method. Surface and quadrature conductivities are found to be proportional to each other, as for sedimentary siliciclastic rocks. A Stern layer polarization model is used to explain these experimental results. Despite the fact that the samples contain some magnetite (up to 5 wt per cent), we were not able to identify the effect of this mineral on the complex conductivity spectra. These results are very encouraging in showing that galvanometric induced polarization measurements can be used in volcanic areas to separate the bulk from the surface conductivity and therefore to define some alteration attributes. Such a goal cannot be achieved with resistivity alone.
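
    For reference, the standard petrophysical relations behind this analysis (general textbook forms, not equations reproduced from the paper) are Archie's law for the intrinsic formation factor and the usual decomposition of the measured in-phase conductivity into bulk and surface contributions:

      F = \phi^{-m}, \qquad \sigma' = \frac{\sigma_w}{F} + \sigma_s,

    where \phi is the connected porosity, m (reported here to average 2.45) is the cementation exponent, \sigma_w is the pore-water conductivity, and \sigma_s is the surface conductivity.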

  18. A self-learning rule base for command following in dynamical systems

    NASA Technical Reports Server (NTRS)

    Tsai, Wei K.; Lee, Hon-Mun; Parlos, Alexander

    1992-01-01

    In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base was used to generate a feedback control to improve the command-following ability of the otherwise uncontrolled systems. The numerical results are very encouraging. The controlled systems exhibit a more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored and they can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.

  19. Quadrature formula for evaluating left bounded Hadamard type hypersingular integrals

    NASA Astrophysics Data System (ADS)

    Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Nik Long, N. M. A.; Okhunov, Abdurahim

    2014-12-01

    The left semi-bounded Hadamard-type hypersingular integral (HSI) of the form $H(h,x) = \frac{1}{\pi}\sqrt{\frac{1+x}{1-x}} \int_{-1}^{1} \sqrt{\frac{1-t}{1+t}}\, \frac{h(t)}{(t-x)^{2}}\, dt$, $x \in (-1,1)$, where h(t) is a smooth function, is considered. The automatic quadrature scheme (AQS) is constructed by approximating the density function h(t) by truncated Chebyshev polynomials of the fourth kind. Numerical results reveal that the proposed AQS is highly accurate when h(t) is chosen to be a polynomial or a rational function. The results are in line with the theoretical findings.

  20. Implementation of artificial intelligence rules in a data base management system

    NASA Technical Reports Server (NTRS)

    Feyock, S.

    1986-01-01

    The intelligent front end prototype was transformed into a RIM-integrated system. A RIM-based expert system was written which demonstrated the developed capability. The use of rules to provide extensibility of the intelligent front end, including the concepts of demons and rule-manipulation rules, was investigated. Innovative approaches such as syntax programming were to be considered.

  1. Extending rule-based methods to model molecular geometry and 3D model resolution.

    PubMed

    Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia

    2016-08-01

    Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models

  2. Optical-wireless-optical full link for polarization multiplexing quadrature amplitude/phase modulation signal transmission.

    PubMed

    Li, Xinying; Yu, Jianjun; Chi, Nan; Zhang, Junwen

    2013-11-15

    We propose and experimentally demonstrate an optical wireless integration system at the Q-band, in which up to 40 Gb/s polarization multiplexing multilevel quadrature amplitude/phase modulation (PM-QAM) signal can be first transmitted over 20 km single-mode fiber-28 (SMF-28), then delivered over a 2 m 2 × 2 multiple-input multiple-output wireless link, and finally transmitted over another 20 km SMF-28. The PM-QAM modulated wireless millimeter-wave (mm-wave) signal at 40 GHz is generated based on the remote heterodyning technique, and demodulated by the radio-frequency transparent photonic technique based on homodyne coherent detection and baseband digital signal processing. The classic constant modulus algorithm equalization is used at the receiver to realize polarization demultiplexing of the PM-QAM signal. For the first time, to the best of our knowledge, we realize the conversion of the PM-QAM modulated wireless mm-wave signal to the optical signal as well as 20 km fiber transmission of the converted optical signal.

  3. Evaluation of the non-Gaussianity of two-mode entangled states over a bosonic memory channel via cumulant theory and quadrature detection

    NASA Astrophysics Data System (ADS)

    Xiang, Shao-Hua; Wen, Wei; Zhao, Yu-Jing; Song, Ke-Hui

    2018-04-01

    We study the properties of the cumulants of multimode boson operators and introduce the phase-averaged quadrature cumulants as the measure of the non-Gaussianity of multimode quantum states. Using this measure, we investigate the non-Gaussianity of two classes of two-mode non-Gaussian states: photon-number entangled states and entangled coherent states traveling in a bosonic memory quantum channel. We show that such a channel can skew the distribution of two-mode quadrature variables, giving rise to a strongly non-Gaussian correlation. In addition, we provide a criterion to determine whether the distributions of these states are super- or sub-Gaussian.

  4. Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems

    NASA Technical Reports Server (NTRS)

    Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith

    1988-01-01

    Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
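
    A minimal sketch in Python of the strengthening/weakening step described above, under assumed data structures: rules whose advice appears in the correct diagnosis (C) are strengthened, and rules whose advice is absent from C are weakened; the field names, update step and bounds are assumptions.

      # rules: dict rule_id -> {"advice": str, "strength": float}
      # fired_rule_ids: rules that contributed to the advice set A
      # correct_diagnoses: set of labels forming the correct diagnosis C
      def update_strengths(rules, fired_rule_ids, correct_diagnoses, step=0.05):
          for rid in fired_rule_ids:
              rule = rules[rid]
              if rule["advice"] in correct_diagnoses:    # confirmed: strengthen
                  rule["strength"] = min(1.0, rule["strength"] + step)
              else:                                      # contradicted: weaken
                  rule["strength"] = max(0.0, rule["strength"] - step)
          return rules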

  5. RuleGO: a logical rules-based tool for description of gene groups by means of Gene Ontology

    PubMed Central

    Gruca, Aleksandra; Sikora, Marek; Polanski, Andrzej

    2011-01-01

    Genome-wide expression profiles obtained with the use of DNA microarray technology provide an abundance of experimental data on biological and molecular processes. Such amounts of data need to be further analyzed and interpreted in order to draw biological conclusions from the experimental results. The analysis requires a lot of experience and is usually a time-consuming process; thus, various annotation databases are frequently used to improve the whole process of analysis. Here, we present RuleGO, a web-based application that allows the user to describe gene groups on the basis of logical rules that include Gene Ontology (GO) terms in their premises. The presented application allows the user to obtain rules that reflect the co-appearance of GO terms describing the genes supported by the rules. The ontology level and the number of co-appearing GO terms are adjusted automatically; the user only limits the space of possible solutions. The RuleGO application is freely available at http://rulego.polsl.pl/. PMID:21715384

  6. Rule-based mechanisms of learning for intelligent adaptive flight control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.; Stengel, Robert F.

    1990-01-01

    How certain aspects of human learning can be used to characterize learning in intelligent adaptive control systems is investigated. Reflexive and declarative memory and learning are described. It is shown that model-based systems-theoretic adaptive control methods exhibit attributes of reflexive learning, whereas the problem-solving capabilities of knowledge-based systems of artificial intelligence are naturally suited for implementing declarative learning. Issues related to learning in knowledge-based control systems are addressed, with particular attention given to rule-based systems. A mechanism for real-time rule-based knowledge acquisition is suggested, and utilization of this mechanism within the context of failure diagnosis for fault-tolerant flight control is demonstrated.

  7. A continuum mechanics-based musculo-mechanical model for esophageal transport

    NASA Astrophysics Data System (ADS)

    Kou, Wenjun; Griffith, Boyce E.; Pandolfino, John E.; Kahrilas, Peter J.; Patankar, Neelesh A.

    2017-11-01

    In this work, we extend our previous esophageal transport model, which used an immersed boundary (IB) method with a discrete fiber-based structural model, to one using a continuum mechanics-based model that is approximated based on finite elements (IB-FE). To deal with the leakage of flow when the Lagrangian mesh becomes coarser than the fluid mesh, we employ adaptive interaction quadrature points for the Lagrangian-Eulerian interaction equations, based on a previous work (Griffith and Luo [1]). In particular, we introduce a new anisotropic adaptive interaction quadrature rule. The new rule permits us to vary the interaction quadrature points not only at each time-step and element but also at different orientations per element. This helps to avoid the leakage issue without sacrificing the computational efficiency and accuracy in dealing with the interaction equations. For the material model, we extend our previous fiber-based model to a continuum-based model. We present formulations for general fiber-reinforced material models in the IB-FE framework. The new material model can handle non-linear elasticity and fiber-matrix interactions, and thus permits us to consider more realistic material behavior of biological tissues. To validate our method, we first study a case in which a three-dimensional short tube is dilated. Results on the pressure-displacement relationship and the stress distribution match very well with those obtained from the implicit FE method. We remark that in our IB-FE case, the three-dimensional tube undergoes a very large deformation and the Lagrangian mesh-size becomes about 6 times the Eulerian mesh-size in the circumferential orientation. To validate the performance of the method in handling fiber-matrix material models, we perform a second study on dilating a long fiber-reinforced tube. Errors are small when we compare numerical solutions with analytical solutions. The technique is then applied to the problem of esophageal transport. We use two

  8. Oxytocin conditions trait-based rule adherence

    PubMed Central

    De Dreu, Carsten K.W.

    2017-01-01

    Abstract Rules, whether in the form of norms, taboos or laws, regulate and coordinate human life. Some rules, however, are arbitrary and adhering to them can be personally costly. Rigidly sticking to such rules can be considered maladaptive. Here, we test whether, at the neurobiological level, (mal)adaptive rule adherence is reduced by oxytocin—a hypothalamic neuropeptide that biases the biobehavioural approach-avoidance system. Participants (N = 139) self-administered oxytocin or placebo intranasally, and reported their need for structure and approach-avoidance sensitivity. Next, participants made binary decisions and were given an arbitrary rule that demanded to forgo financial benefits. Under oxytocin, participants violated the rule more often, especially when they had high need for structure and high approach sensitivity. Possibly, oxytocin dampens the need for a highly structured environment and enables individuals to flexibly trade-off internal desires against external restrictions. Implications for the treatment of clinical disorders marked by maladaptive rule adherence are discussed. PMID:27664999

  9. RANWAR: rank-based weighted association rule mining from gene expression and methylation data.

    PubMed

    Mallik, Saurav; Mukhopadhyay, Anirban; Maulik, Ujjwal

    2015-01-01

    Ranking of association rules is currently an interesting topic in data mining and bioinformatics. The huge number of rules over items (or genes) produced by association rule mining (ARM) algorithms confuses the decision maker. In this article, we propose a weighted rule-mining technique (RANWAR, or rank-based weighted association rule mining) to rank the rules using two novel rule-interestingness measures, viz., the rank-based weighted condensed support (wcs) and weighted condensed confidence (wcc) measures, to bypass this problem. These measures basically depend on the rank of the items (genes). Using the rank, we assign a weight to each item. RANWAR generates a much smaller number of frequent itemsets than the state-of-the-art association rule mining algorithms, thus reducing the execution time of the algorithm. We run RANWAR on gene expression and methylation datasets. The genes of the top rules are biologically validated by Gene Ontology (GO) and KEGG pathway analyses. Many top-ranked rules extracted by RANWAR that hold poor ranks in traditional Apriori are highly biologically significant to the related diseases. Finally, the top rules evolved by RANWAR that are not found by Apriori are reported.
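
    A minimal sketch in Python of the idea of rank-based item weights feeding a weighted support measure; the weighting formula, the item names and the toy transactions are illustrative assumptions, and the exact wcs/wcc definitions in the paper may differ.

      # Higher-ranked items receive larger weights; a rule's weighted support
      # averages the weights of its items over the transactions covering it.
      def item_weights(ranked_items):
          n = len(ranked_items)
          return {item: (n - i) / n for i, item in enumerate(ranked_items)}

      def weighted_support(itemset, transactions, weights):
          covered = [t for t in transactions if itemset <= t]
          if not covered:
              return 0.0
          w = sum(weights[i] for i in itemset) / len(itemset)
          return w * len(covered) / len(transactions)

      weights = item_weights(["GENE_A", "GENE_B", "GENE_C", "GENE_D"])
      transactions = [{"GENE_A", "GENE_C"}, {"GENE_A", "GENE_B", "GENE_C"}, {"GENE_D"}]
      print(weighted_support({"GENE_A", "GENE_C"}, transactions, weights))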

  10. Rule-based topology system for spatial databases to validate complex geographic datasets

    NASA Astrophysics Data System (ADS)

    Martinez-Llario, J.; Coll, E.; Núñez-Andrés, M.; Femenia-Ribera, C.

    2017-06-01

    A rule-based topology software system providing a highly flexible and fast procedure to enforce integrity in spatial relationships among datasets is presented. This improved topology rule system is built over the spatial extension Jaspa. Both projects are open source, freely available software developed by the corresponding author of this paper. Currently, there is no spatial DBMS that implements a rule-based topology engine (considering that the topology rules are designed and performed in the spatial backend). If the topology rules are applied in the frontend (as in many GIS desktop programs), ArcGIS is the most advanced solution. The system presented in this paper has several major advantages over the ArcGIS approach: it can be extended with new topology rules, it has a much wider set of rules, and it can mix feature attributes with topology rules as filters. In addition, the topology rule system can work with various DBMSs, including PostgreSQL, H2 or Oracle, and the logic is performed in the spatial backend. The proposed topology system allows users to check the complex spatial relationships among features (from one or several spatial layers) that require some complex cartographic datasets, such as the data specifications proposed by INSPIRE in Europe and the Land Administration Domain Model (LADM) for Cadastral data.

  11. Optimal Sequential Rules for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  12. Simulation of operating rules and discretional decisions using a fuzzy rule-based system integrated into a water resources management model

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2013-04-01

    Water resources systems are mostly operated using a set of pre-defined rules that usually respond not to an optimal allocation in terms of water use or economic benefit, but to historical and institutional reasons. These operating policies are commonly reproduced as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypotheses. Despite the high degree of acceptance and testing that these models have achieved, the actual operation of water resources systems rarely follows the pre-defined rules at all times, with the consequent uncertainty about system performance. Real-world reservoir operation is very complex: it is affected by input uncertainty (imprecision in forecast inflow, seepage and evaporation losses, etc.) and filtered by the reservoir operator's experience and natural risk-aversion, while considering the different physical and legal/institutional constraints in order to meet the different demands and system requirements. The aim of this work is to present a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as closely as possible the actual decisions made by managers. Once built, the fuzzy rule-based system can be integrated into a water resources management model, making it possible to assess system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated in a conjunctive way) with the purpose of guaranteeing a high reliability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction). A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total

  13. Developing a modular architecture for creation of rule-based clinical diagnostic criteria.

    PubMed

    Hong, Na; Pathak, Jyotishman; Chute, Christopher G; Jiang, Guoqian

    2016-01-01

    With recent advances in computerized patient record systems, there is an urgent need for producing computable and standards-based clinical diagnostic criteria. Notably, constructing rule-based clinical diagnostic criteria has become one of the goals of the International Classification of Diseases (ICD)-11 revision. However, few studies have been done on building a unified architecture to support the need for diagnostic criteria computerization. In this study, we present a modular architecture for enabling the creation of rule-based clinical diagnostic criteria leveraging Semantic Web technologies. The architecture consists of two modules: an authoring module that utilizes a standards-based information model and a translation module that leverages the Semantic Web Rule Language (SWRL). In a prototype implementation, we created a diagnostic criteria upper ontology (DCUO) that integrates the ICD-11 content model with the Quality Data Model (QDM). Using the DCUO, we developed a transformation tool that converts QDM-based diagnostic criteria into SWRL representation. We evaluated the domain coverage of the upper ontology model using randomly selected diagnostic criteria from broad domains (n = 20). We also tested the transformation algorithms using 6 QDM templates for ontology population and 15 QDM-based criteria data for rule generation. As a result, the first draft of DCUO contains 14 root classes, 21 subclasses, 6 object properties and 1 data property. Investigation Findings and Signs and Symptoms are the two most commonly used element types. All 6 HQMF templates are successfully parsed and populated into their corresponding domain-specific ontologies, and 14 rules (93.3%) passed the rule validation. Our efforts in developing and prototyping a modular architecture provide useful insight into how to build a scalable solution to support diagnostic criteria representation and computerization.

  14. Performance Analysis of Direct-Sequence Code-Division Multiple-Access Communications with Asymmetric Quadrature Phase-Shift-Keying Modulation

    NASA Technical Reports Server (NTRS)

    Wang, C.-W.; Stark, W.

    2005-01-01

    This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
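
    A minimal sketch in Python/NumPy of the asymmetric-QPSK idea for unequal error protection, under one common convention (not necessarily the exact signal model of the article): the more-protected bit modulates the in-phase axis with a larger amplitude than the less-protected bit on the quadrature axis, so the two bit streams see different effective distances to the decision boundaries.

      import numpy as np

      # b1: more-protected bit on I (amplitude a1); b2: less-protected bit on Q
      # (amplitude a2 < a1).  The amplitude values are assumptions.
      def aqpsk_symbol(b1, b2, a1=1.0, a2=0.5):
          return a1 * (2 * b1 - 1) + 1j * a2 * (2 * b2 - 1)

      symbols = [aqpsk_symbol(b1, b2) for b1 in (0, 1) for b2 in (0, 1)]
      print(np.round(symbols, 2))          # the four asymmetric constellation points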

  15. Rule-based support system for multiple UMLS semantic type assignments

    PubMed Central

    Geller, James; He, Zhe; Perl, Yehoshua; Morrey, C. Paul; Xu, Julia

    2012-01-01

    Background When new concepts are inserted into the UMLS, they are assigned one or several semantic types from the UMLS Semantic Network by the UMLS editors. However, not every combination of semantic types is permissible. It was observed that many concepts with rare combinations of semantic types have erroneous semantic type assignments or prohibited combinations of semantic types. The correction of such errors is resource-intensive. Objective We design a computational system to inform UMLS editors as to whether a specific combination of two, three, four, or five semantic types is permissible, prohibited or questionable. Methods We identify a set of inclusion and exclusion instructions in the UMLS Semantic Network documentation and derive corresponding rule-categories, as well as rule-categories from the UMLS concept content. We then design an algorithm, adviseEditor, based on these rule-categories. The algorithm specifies rules for how an editor should proceed when considering a tuple (pair, triple, quadruple, quintuple) of semantic types to be assigned to a concept. Results Eight rule-categories were identified. A Web-based system was developed to implement the adviseEditor algorithm, which returns, for an input combination of semantic types, whether it is permitted, prohibited or (in a few cases) requires more research. The numbers of semantic type pairs assigned to each rule-category are reported. Interesting examples for each rule-category are illustrated. Cases of semantic type assignments that contradict rules are listed, including recently introduced ones. Conclusion The adviseEditor system implements explicit and implicit knowledge available in the UMLS in a system that informs UMLS editors about the permissibility of a desired combination of semantic types. Using adviseEditor might help accelerate the work of the UMLS editors and prevent erroneous semantic type assignments. PMID:23041716

  16. Oxytocin conditions trait-based rule adherence.

    PubMed

    Gross, Jörg; De Dreu, Carsten K W

    2017-03-01

    Rules, whether in the form of norms, taboos or laws, regulate and coordinate human life. Some rules, however, are arbitrary and adhering to them can be personally costly. Rigidly sticking to such rules can be considered maladaptive. Here, we test whether, at the neurobiological level, (mal)adaptive rule adherence is reduced by oxytocin-a hypothalamic neuropeptide that biases the biobehavioural approach-avoidance system. Participants (N = 139) self-administered oxytocin or placebo intranasally, and reported their need for structure and approach-avoidance sensitivity. Next, participants made binary decisions and were given an arbitrary rule that demanded to forgo financial benefits. Under oxytocin, participants violated the rule more often, especially when they had high need for structure and high approach sensitivity. Possibly, oxytocin dampens the need for a highly structured environment and enables individuals to flexibly trade-off internal desires against external restrictions. Implications for the treatment of clinical disorders marked by maladaptive rule adherence are discussed. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  17. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks

    PubMed Central

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco

    2016-01-01

    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles. PMID:27399709

  18. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximate but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  19. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    NASA Astrophysics Data System (ADS)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms that of the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.

  20. Classification Based on Pruning and Double Covered Rule Sets for the Internet of Things Applications

    PubMed Central

    Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of Things (IoT) has become a hot topic in recent years. It accumulates large amounts of data from IoT users, which makes mining useful knowledge from IoT a great challenge. Classification is an effective strategy that can predict the needs of users in IoT. However, many traditional rule-based classifiers cannot guarantee that every instance is covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification method, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B; every instance in the training set is covered by at least one rule in rule set A and by at least one rule in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of the rules in rule set B. Our experimental results indicate that CDCR-P is not only feasible but also achieves high accuracy. PMID:24511304

  1. Classification based on pruning and double covered rule sets for the internet of things applications.

    PubMed

    Li, Shasha; Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of Things (IoT) has become a hot topic in recent years. It accumulates large amounts of data from IoT users, which makes mining useful knowledge from IoT a great challenge. Classification is an effective strategy that can predict the needs of users in IoT. However, many traditional rule-based classifiers cannot guarantee that every instance is covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification method, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B; every instance in the training set is covered by at least one rule in rule set A and by at least one rule in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of the rules in rule set B. Our experimental results indicate that CDCR-P is not only feasible but also achieves high accuracy.

  2. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems offers a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  3. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    NASA Astrophysics Data System (ADS)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into one or zero; the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users' understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
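
    As a rough illustration of the rule-creation step (support counting followed by IF-THEN rule extraction), the following Python sketch implements a tiny Apriori-style miner; it is not the authors' R implementation, and the transactions, thresholds and item names are invented.

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.4, min_confidence=0.7):
    """Tiny Apriori-style miner: frequent itemsets by support, then IF-THEN rules."""
    n = len(transactions)
    items = sorted({item for t in transactions for item in t})

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    # Level-wise generation of frequent itemsets.
    frequent = []
    level = [s for s in (frozenset([i]) for i in items) if support(s) >= min_support]
    while level:
        frequent.extend(level)
        candidates = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
        level = [s for s in candidates if support(s) >= min_support]

    # Split each frequent itemset into "IF antecedent THEN consequent" rules.
    rules = []
    for itemset in frequent:
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                conf = support(itemset) / support(antecedent)
                if conf >= min_confidence:
                    rules.append((set(antecedent), set(itemset - antecedent), conf))
    return rules

# Hypothetical machine-event transactions: each set is one observed failure record.
logs = [{"overheat", "vibration", "failure"},
        {"overheat", "failure"},
        {"vibration", "noise"},
        {"overheat", "vibration", "failure"},
        {"noise"}]
for if_part, then_part, conf in apriori_rules(logs):
    print(f"IF {if_part} THEN {then_part} (confidence {conf:.2f})")
```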

  4. An evaluation and implementation of rule-based Home Energy Management System using the Rete algorithm.

    PubMed

    Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular, and Home Energy Management Systems (HEMS) play an important role in saving energy without decreasing QoL (Quality of Life). Many rule-based HEMSs have been proposed, and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern matching algorithm for IF-THEN rules. We have proposed a rule-based Home Energy Management System (HEMS) using the Rete algorithm. In the proposed system, rules for managing energy are processed by smart taps in the network, and the loads for processing rules and collecting data are distributed among the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules based on the Rete algorithm. In this paper, we evaluated the proposed system by simulation. In the simulation environment, rules are processed by the smart tap that relates to the action part of each rule. In addition, we implemented the proposed system as a HEMS using smart taps.

  5. Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    PubMed Central

    Stover, Lori J.; Nair, Niketh S.; Faeder, James R.

    2014-01-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This “network-free” approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of “partial network expansion” into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved with the hybrid approach.
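
    The abstract mentions Gillespie's algorithm as one network-based simulation option. As a point of reference only (this is not BioNetGen or NFsim), a minimal direct-method Gillespie simulation of a toy reversible binding reaction might look like the following; the species, rates and stoichiometry are invented for illustration.

```python
import numpy as np

def gillespie(x0, rates, stoich, propensity, t_end, seed=0):
    """Gillespie direct method: exact stochastic trajectory of a reaction network."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x, rates)                      # propensities of all reactions
        a_total = a.sum()
        if a_total == 0.0:
            break                                     # no reaction can fire any more
        t += rng.exponential(1.0 / a_total)           # waiting time to next reaction
        reaction = rng.choice(len(a), p=a / a_total)  # which reaction fires
        x += stoich[reaction]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy network over species [A, B, C]: A + B -> C (rate k1), C -> A + B (rate k2).
stoich = np.array([[-1, -1, +1],
                   [+1, +1, -1]])

def propensity(x, k):
    return np.array([k[0] * x[0] * x[1], k[1] * x[2]])

times, states = gillespie(x0=[100, 80, 0], rates=[0.001, 0.05],
                          stoich=stoich, propensity=propensity, t_end=50.0)
```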

  6. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    PubMed

    Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R

    2014-04-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the hybrid approach.

  7. Rule-Based Category Learning in Down Syndrome

    ERIC Educational Resources Information Center

    Phillips, B. Allyson; Conners, Frances A.; Merrill, Edward; Klinger, Mark R.

    2014-01-01

    Rule-based category learning was examined in youths with Down syndrome (DS), youths with intellectual disability (ID), and typically developing (TD) youths. Two tasks measured category learning: the Modified Card Sort task (MCST) and the Concept Formation test of the Woodcock-Johnson-III (Woodcock, McGrew, & Mather, 2001). In regression-based…

  8. Techniques and implementation of the embedded rule-based expert system using Ada

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Jones, Robert E.

    1991-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada with its portability, transportability, and maintainability lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language, specifically a rule-based expert system using the ART-Ada (Automated Reasoning Tool for Ada) system shell, is discussed. The NASA Lewis Research Center was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  9. Double-Referential Holography and Spatial Quadrature Amplitude Modulation

    NASA Astrophysics Data System (ADS)

    Zukeran, Keisuke; Okamoto, Atsushi; Takabayashi, Masanori; Shibukawa, Atsushi; Sato, Kunihiro; Tomita, Akihisa

    2013-09-01

    We proposed double-referential holography (DRH), which allows phase detection without additional external beams. In the DRH, phantom beams, prepared in the same optical path as the signal beams and preliminarily multiplexed in a recording medium along with the signal, are used to produce interference fringes on an imager for converting a phase into an intensity distribution. The DRH enables stable and high-accuracy phase detection independent of the fluctuations and vibrations of the optical system owing to medium shift and temperature variation. In addition, the collinear arrangement of the signal and phantom beams makes the optical data storage system compact. We conducted an experiment using binary phase modulation signals for verifying the DRH operation. In addition, 38-level spatial quadrature amplitude modulation signals were successfully reproduced with the DRH by numerical simulation. Furthermore, we verified that the distributed phase-shifting method moderates the dynamic range consumption for the exposure of phantom beams.

  10. Traditional versus rule-based programming techniques - Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    A traditional programming technique for controlling the display of optional flight information in a civil transport cockpit is compared to a rule-based technique for the same function. This application required complex decision logic and a frequently modified rule base. The techniques are evaluated for execution efficiency and implementation ease. The criterion used to calculate execution efficiency is the total number of steps required to isolate the hypotheses that were true; the criteria used to evaluate implementability are ease of modification, ease of verification, and explanation capability. It is observed that the traditional program is more efficient than the rule-based program; however, the rule-based programming technique is more applicable for improving programmer productivity.

  11. Evaluating the potential for quantitative monitoring of in situ chemical oxidation of aqueous-phase TCE using in-phase and quadrature electrical conductivity

    NASA Astrophysics Data System (ADS)

    Hort, R. D.; Revil, A.; Munakata-Marr, J.; Mao, D.

    2015-07-01

    Electrical resistivity measurements can potentially be used to remotely monitor fate and transport of ionic oxidants such as permanganate (MnO4-) during in situ chemical oxidation (ISCO) of contaminants like trichloroethene (TCE). Time-lapse two-dimensional bulk conductivity and induced polarization surveys conducted during a sand tank ISCO simulation demonstrated that MnO4- plume movement could be monitored in a qualitative manner using bulk conductivity tomograms, although chargeability was below sensitivity limits. We also examined changes to in-phase and quadrature electrical conductivity resulting from ion injection, MnO2 and Cl- production, and pH change during TCE and humate oxidation by MnO4- in homogeneous aqueous solutions and saturated porous media samples. Data from the homogeneous samples demonstrated that inversion of the sand tank resistivity data using a common Tikhonov regularization approach was insufficient to recover an accurate conductivity distribution within the tank. While changes to in-phase conductivity could be successfully modeled, quadrature conductivity values could not be directly related to TCE oxidation product or MnO4- concentrations at frequencies consistent with field induced polarization surveys, limiting the utility of quadrature conductivity for monitoring ISCO.

  12. Rule based artificial intelligence expert system for determination of upper extremity impairment rating.

    PubMed

    Lim, I; Walkup, R K; Vannier, M W

    1993-04-01

    Quantitative evaluation of upper extremity impairment, a percentage rating most often determined using a rule-based procedure, has been implemented on a personal computer using an artificial intelligence, rule-based expert system (AI system). In this study, the rules given in Chapter 3 of the AMA Guides to the Evaluation of Permanent Impairment (Third Edition) were used to develop such an AI system for the Apple Macintosh. The program applies the rules from the Guides in a consistent and systematic fashion. It is faster and less error-prone than the manual method, and the results have a higher degree of precision, since intermediate values are not truncated.

  13. Rule-Based Category Learning in Children: The Role of Age and Executive Functioning

    PubMed Central

    Rabi, Rahel; Minda, John Paul

    2014-01-01

    Rule-based category learning was examined in 4–11 year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning. PMID:24489658

  14. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have seen a revival and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
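
    The abstract describes converting a rule-based system into a neural network with fixed weights and thresholds. As a rough illustration (a common textbook construction, not necessarily the authors' exact translation rules), the sketch below encodes an IF-THEN rule whose conditions are AND-ed binary inputs as a single threshold unit; the fault-rule names are invented.

```python
import numpy as np

def rule_to_neuron(condition_indices, n_inputs):
    """Encode 'IF all listed binary inputs are 1 THEN fire' as a threshold unit.

    Weights are 1 on the conditions and 0 elsewhere; the threshold sits just
    below the number of conditions, so the unit fires only when all hold.
    """
    w = np.zeros(n_inputs)
    w[condition_indices] = 1.0
    threshold = len(condition_indices) - 0.5
    return w, threshold

def fire(x, w, threshold):
    return float(np.dot(w, x) > threshold)

# Hypothetical fault rules over inputs [overheat, low_pressure, vibration]:
#   R1: IF overheat AND vibration THEN shutdown
#   R2: IF low_pressure THEN warning
w1, t1 = rule_to_neuron([0, 2], n_inputs=3)
w2, t2 = rule_to_neuron([1], n_inputs=3)

x = np.array([1, 0, 1])                   # overheat and vibration present
print(fire(x, w1, t1), fire(x, w2, t2))   # -> 1.0 0.0
```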

  15. GraDit: graph-based data repair algorithm for multiple data edits rule violations

    NASA Astrophysics Data System (ADS)

    Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Constraint-based data cleaning captures data violations with respect to a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally, they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations. That algorithm uses an undirected hypergraph to represent rule violations. Nevertheless, it cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of all defined rules. This representation is used to capture the interaction between violated rules and clean rules. In addition, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with an undirected graph as the violation representation model gave better data quality than the algorithm with an undirected hypergraph as the representation model.

  16. Modeling for (physical) biologists: an introduction to the rule-based approach

    PubMed Central

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-01-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions. PMID:26178138

  17. Genetic learning in rule-based and neural systems

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.
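
    For readers unfamiliar with the GA mechanics the presentation introduces, the following is a minimal, generic bit-string GA sketch (tournament selection, one-point crossover, bit-flip mutation) maximizing a toy objective; it contains none of the LCS machinery, and all parameters are illustrative assumptions.

```python
import random
random.seed(1)

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      p_cross=0.9, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                       # elitism: keep the two best
        while len(next_pop) < pop_size:
            # Tournament selection of two parents.
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            # One-point crossover.
            if random.random() < p_cross:
                cut = random.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            # Bit-flip mutation.
            child = [bit ^ (random.random() < p_mut) for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy objective ("one-max"): maximize the number of 1-bits.
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```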

  18. A rule-based smart automated fertilization and irrigation systems

    NASA Astrophysics Data System (ADS)

    Yousif, Musab El-Rashid; Ghafar, Khairuddin; Zahari, Rahimi; Lim, Tiong Hoo

    2018-04-01

    Smart automation in industry has become very important as it can improve the reliability and efficiency of systems. The use of smart technologies in agriculture has increased over the years to ensure and control crop production and to address food security. However, it is important to use proper irrigation systems to avoid water wastage and overfeeding of the plants. In this paper, a Smart Rule-based Automated Fertilization and Irrigation System is proposed and evaluated. We propose a rule-based decision-making algorithm to monitor and control the food supply to the plant and the soil quality. A built-in alert system is also used to update the farmer via text message. The system is developed and evaluated using real hardware.
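
    As a rough illustration of the kind of rule-based decision logic described here, the sketch below applies simple IF-THEN threshold rules to hypothetical sensor readings; the sensor names, thresholds and alert texts are invented and do not come from the paper.

```python
def irrigation_decision(soil_moisture, nutrient_level, rain_forecast,
                        moisture_low=30.0, nutrient_low=0.4):
    """Return (water_on, fertilize, alert) from simple IF-THEN rules.

    All thresholds are illustrative; a real deployment would calibrate them
    per crop and soil type.
    """
    water_on = soil_moisture < moisture_low and not rain_forecast
    fertilize = nutrient_level < nutrient_low
    # Alert the farmer (e.g., by text message) only on actionable conditions.
    alert = None
    if water_on and fertilize:
        alert = "Low moisture and low nutrients: irrigation and fertilization started."
    elif water_on:
        alert = "Low soil moisture: irrigation started."
    elif fertilize:
        alert = "Low nutrient level: fertilization started."
    return water_on, fertilize, alert

print(irrigation_decision(soil_moisture=22.5, nutrient_level=0.55, rain_forecast=False))
```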

  19. An architecture for rule based system explanation

    NASA Technical Reports Server (NTRS)

    Fennel, T. R.; Johannes, James D.

    1990-01-01

    A system architecture is presented which incorporates both graphics and text into explanations provided by rule-based expert systems. This architecture facilitates explanation of the knowledge base content, the control strategies employed by the system, and the conclusions made by the system. The suggested approach combines hypermedia and inference engine capabilities. Advantages include: closer integration of the user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and more direct control of explanation depth and duration by the user. User models are suggested to control the type, amount, and order of information presented.

  20. Recommendation System Based On Association Rules For Distributed E-Learning Management Systems

    NASA Astrophysics Data System (ADS)

    Mihai, Gabroveanu

    2015-09-01

    Traditional Learning Management Systems are installed on a single server, where learning materials and user data are kept. To increase performance, the Learning Management System can be installed on multiple servers; learning materials and user data can then be distributed across these servers, yielding a Distributed Learning Management System. In this paper, the prototype of a recommendation system based on association rules for a Distributed Learning Management System is proposed. Information from LMS databases is analyzed using distributed data mining algorithms in order to extract the association rules. The extracted rules are then used as inference rules to provide personalized recommendations. The quality of the provided recommendations is improved because the rules used to make the inferences are more accurate, since these rules aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.

  1. Creating an ontology driven rules base for an expert system for medical diagnosis.

    PubMed

    Bertaud Gounot, Valérie; Donfack, Valéry; Lasbleiz, Jérémy; Bourde, Annabel; Duvauferrier, Régis

    2011-01-01

    Expert systems of the 1980s failed because of the difficulties of maintaining large rule bases. The current work proposes a method to build and maintain rule bases grounded on ontologies (like NCIT). The process described here for an expert system on plasma cell disorders encompasses extraction of a sub-ontology and automatic, comprehensive generation of production rules. The creation of rules is not based directly on classes, but on individuals (instances). Instances can be considered as prototypes of diseases formally defined by restrictions in the ontology. Thus, it is possible to use this process to make diagnoses of diseases. The perspectives of this work are considered: the process described with an ontology formalized in OWL1 can be extended by using an ontology in OWL2, allowing reasoning about numerical data in addition to symbolic data.

  2. HERB: A production system for programming with hierarchical expert rule bases: User's manual, HERB Version 1. 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummel, K.E.

    1987-12-01

    Expert systems are artificial intelligence programs that solve problems requiring large amounts of heuristic knowledge, based on years of experience and tradition. Production systems are domain-independent tools that support the development of rule-based expert systems. This document describes a general purpose production system known as HERB. This system was developed to support the programming of expert systems using hierarchically structured rule bases. HERB encourages the partitioning of rules into multiple rule bases and supports the use of multiple conflict resolution strategies. Multiple rule bases can also be placed on a system stack and simultaneously searched during each interpreter cycle. Both backward and forward chaining rules are supported by HERB. The condition portion of each rule can contain both patterns, which are matched with facts in a data base, and LISP expressions, which are explicitly evaluated in the LISP environment. Properties of objects can also be stored in the HERB data base and referenced within the scope of each rule. This document serves both as an introduction to the principles of LISP-based production systems and as a user's manual for the HERB system. 6 refs., 17 figs.

  3. Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, M.; Hu, N. Q.; Qin, G. J.

    2011-07-01

    In order to extract decision rules for fault diagnosis from incomplete historical test records for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract the optimal generalized decision rules from incomplete information based on granular computing (GrC) was proposed. Based on semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal general decision rule was introduced; using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example for a power train, the application approach of the method was presented, and the validity of this method in knowledge acquisition was demonstrated.

  4. Annotation of rule-based models with formal semantics to enable creation, analysis, reuse and visualization.

    PubMed

    Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil

    2016-03-15

    Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given, using specific implementations the proposed techniques can be applied to rule-based models in general. The annotation ontology for rule-based models can be found at http://purl.org/rbm/rbmo The krdf tool and associated executable examples are

  5. AVNM: A Voting based Novel Mathematical Rule for Image Classification.

    PubMed

    Vidyarthi, Ankit; Mittal, Namita

    2016-12-01

    In machine learning, the accuracy of a system depends upon the classification result. Classification accuracy plays an imperative role in various domains. A non-parametric classifier like K-Nearest Neighbor (KNN) is the most widely used classifier for pattern analysis. Besides its ease of use, simplicity and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, i.e. "k", for computation. At present, it is hard to find the optimal value of "k" using any statistical algorithm that gives perfect accuracy in terms of a low misclassification error rate. Motivated by this problem, a new sample space reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is also non-parametric in nature, like KNN. AVNM uses a weighted voting mechanism with sample space reduction to learn and examine the predicted class label for an unidentified sample. AVNM is free from any initial selection of a predefined variable and neighbor selection as found in the KNN algorithm. The proposed classifier also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were made on 10 standard datasets taken from the UCI database and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants. Experimental results based on the confusion matrix accuracy parameter show higher accuracy with the AVNM rule. The proposed AVNM rule is based on a sample space reduction mechanism for identification of the optimal number of nearest neighbor selections. AVNM results in better classification accuracy and a minimum error rate compared with the state-of-the-art algorithm, KNN, and its variants. The proposed rule automates the selection of nearest neighbors and improves the classification rate for the UCI datasets and the manually created dataset.
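
    For context on the baseline that the AVNM rule is compared against, the following is a minimal distance-weighted KNN voting sketch; it is the standard KNN variant, not AVNM itself (whose sample-space-reduction details are in the paper), and the toy data are invented.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-12):
    """Distance-weighted KNN: each of the k nearest neighbours votes with weight 1/d."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = {}
    for idx in nearest:
        votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + 1.0 / (d[idx] + eps)
    return max(votes, key=votes.get)

# Tiny two-class example.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8], [0.2, 0.1]])
y = np.array([0, 0, 1, 1, 0])
print(weighted_knn_predict(X, y, x=np.array([0.8, 0.9]), k=3))   # -> 1
```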

  6. Reservoir adaptive operating rules based on both of historical streamflow and future projections

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Liu, Pan; Wang, Hao; Chen, Jie; Lei, Xiaohui; Feng, Maoyuan

    2017-10-01

    Climate change is affecting hydrological variables and consequently is impacting water resources management. Historical strategies are no longer applicable under climate change. Therefore, adaptive management, especially adaptive operating rules for reservoirs, has been developed to mitigate the possible adverse effects of climate change. However, to date, adaptive operating rules are generally based on future projections involving uncertainties under climate change, yet ignoring historical information. To address this, we propose an approach for deriving adaptive operating rules considering both historical information and future projections, namely historical and future operating rules (HAFOR). A robustness index was developed by comparing benefits from HAFOR with benefits from conventional operating rules (COR). For both historical and future streamflow series, maximizations of both average benefits and the robustness index were employed as objectives, and four trade-offs were implemented to solve the multi-objective problem. Based on the integrated objective, the simulation-based optimization method was used to optimize the parameters of HAFOR. Using the Dongwushi Reservoir in China as a case study, HAFOR was demonstrated to be an effective and robust method for developing adaptive operating rules under the uncertain changing environment. Compared with historical or projected future operating rules (HOR or FPOR), HAFOR can reduce the uncertainty and increase the robustness for future projections, especially regarding results of reservoir releases and volumes. HAFOR, therefore, facilitates adaptive management in the context that climate change is difficult to predict accurately.

  7. Adaptive Square-Root Cubature-Quadrature Kalman Particle Filter for satellite attitude determination using vector observations

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Pourtakdoust, Seid H.

    2014-12-01

    A novel algorithm is presented in this study for estimation of a spacecraft's attitude and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to accounting for the new measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computation burden as well. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the satellite under study is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation.

  8. Nested sparse grid collocation method with delay and transformation for subsurface flow and transport problems

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-06-01

    In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.
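
    As a small, self-contained illustration of the unnested Gauss-Hermite quadrature that the paper compares against, the sketch below estimates the mean of a function of a one-dimensional Gaussian input; the nested Kronrod-Patterson-Hermite rules and the delay and transformation steps of the paper are not reproduced here, and the test function is invented.

```python
import numpy as np

def gauss_hermite_mean(f, mu, sigma, level=5):
    """Approximate E[f(X)] for X ~ N(mu, sigma^2) with Gauss-Hermite quadrature.

    Uses the change of variables x = mu + sqrt(2)*sigma*t, which maps the
    Gaussian expectation onto the Hermite weight exp(-t^2):
        E[f(X)] = (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*t_i)
    """
    t, w = np.polynomial.hermite.hermgauss(level)
    return np.sum(w * f(mu + np.sqrt(2.0) * sigma * t)) / np.sqrt(np.pi)

# Sanity check against a known moment: E[X^2] = mu^2 + sigma^2.
mu, sigma = 1.0, 0.3
print(gauss_hermite_mean(lambda x: x**2, mu, sigma))   # ~ 1.09
```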

  9. Pushing the rules: effects and aftereffects of deliberate rule violations.

    PubMed

    Wirth, Robert; Pfister, Roland; Foerster, Anna; Huestegge, Lynn; Kunde, Wilfried

    2016-09-01

    Most of our daily life is organized around rules and social norms. But what makes rules so special? And what if one were to break a rule intentionally? Can we simply free ourselves from the present set of rules, or do we automatically adhere to them? How do rule violations influence subsequent behavior? To investigate the effects and aftereffects of violating a simple S-R rule, we conducted three experiments that investigated continuous finger-tracking responses on an iPad. Our experiments show that rule violations are distinct from rule-based actions in both response times and movement trajectories: they take longer to initiate and execute, and their movement trajectory is heavily contorted. The data not only show differences between the two types of response (rule-based vs. violation), but also yield a characteristic pattern of aftereffects in the case of rule violations: rule violations do not trigger adaptation effects that render further rule violations less difficult; rather, every rule violation demands renewed effort from the agent. The study represents a first step towards understanding the signature and underlying mechanisms of deliberate rule violations: they cannot be acted out by themselves, but require the activation of the original rule first. Consequently, they are best understood as reformulations of existing rules that are not accessible on their own, but need to be constantly derived from the original rule, with an add-on that might entail an active tendency to steer away from mental representations that reflect (socially) unwanted behavior.

  10. Association-rule-based tuberculosis disease diagnosis

    NASA Astrophysics Data System (ADS)

    Asha, T.; Natarajan, S.; Murthy, K. N. B.

    2010-02-01

    Tuberculosis (TB) is a disease caused by bacteria called Mycobacterium tuberculosis. It usually spreads through the air and attacks people with weakened immune systems, such as patients with Human Immunodeficiency Virus (HIV). This work focuses on finding close association rules, a promising technique in Data Mining, within TB data. The proposed method first normalizes the raw data from medical records, which include categorical, nominal and continuous attributes, and then determines association rules from the normalized data with different support and confidence values. The association rules are applied to a real data set containing medical records of patients with TB obtained from a state hospital. The determined rules describe close associations between symptoms; as an example, the occurrence of sputum is closely associated with blood cough and HIV.

  11. Information entropy of Gegenbauer polynomials and Gaussian quadrature

    NASA Astrophysics Data System (ADS)

    Sánchez-Ruiz, Jorge

    2003-05-01

    In a recent paper (Buyarov V S, López-Artés P, Martínez-Finkelshtein A and Van Assche W 2000 J. Phys. A: Math. Gen. 33 6549-60), an efficient method was provided for evaluating in closed form the information entropy of the Gegenbauer polynomials $C_n^{\lambda}(x)$ in the case when $\lambda = l \in \mathbb{N}$. For given values of $n$ and $l$, this method requires the computation, by means of recurrence relations, of two auxiliary polynomials, $P(x)$ and $H(x)$, of degrees $2l-2$ and $2l-4$, respectively. Here it is shown that $P(x)$ is related to the coefficients of the Gaussian quadrature formula for the Gegenbauer weight $w_l(x) = (1-x^2)^{l-1/2}$, and this fact is used to obtain the explicit expression of $P(x)$. From this result, an explicit formula is also given for the polynomial $S(x) = \lim_{n \to \infty} P(1 - x/(2n^2))$, which is relevant to the study of the asymptotic ($n \to \infty$ with $l$ fixed) behaviour of the entropy.

  12. A robust two-node, 13 moment quadrature method of moments for dilute particle flows including wall bouncing

    NASA Astrophysics Data System (ADS)

    Sun, Dan; Garmory, Andrew; Page, Gary J.

    2017-02-01

    For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.

  13. Haunted by a doppelgänger: irrelevant facial similarity affects rule-based judgments.

    PubMed

    von Helversen, Bettina; Herzog, Stefan M; Rieskamp, Jörg

    2014-01-01

    Judging other people is a common and important task. Every day professionals make decisions that affect the lives of other people when they diagnose medical conditions, grant parole, or hire new employees. To prevent discrimination, professional standards require that decision makers render accurate and unbiased judgments solely based on relevant information. Facial similarity to previously encountered persons can be a potential source of bias. Psychological research suggests that people only rely on similarity-based judgment strategies if the provided information does not allow them to make accurate rule-based judgments. Our study shows, however, that facial similarity to previously encountered persons influences judgment even in situations in which relevant information is available for making accurate rule-based judgments and where similarity is irrelevant for the task and relying on similarity is detrimental. In two experiments in an employment context we show that applicants who looked similar to high-performing former employees were judged as more suitable than applicants who looked similar to low-performing former employees. This similarity effect was found despite the fact that the participants used the relevant résumé information about the applicants by following a rule-based judgment strategy. These findings suggest that similarity-based and rule-based processes simultaneously underlie human judgment.

  14. Enumeration of Ring–Chain Tautomers Based on SMIRKS Rules

    PubMed Central

    2015-01-01

    A compound exhibits (prototropic) tautomerism if it can be represented by two or more structures that are related by a formal intramolecular movement of a hydrogen atom from one heavy atom position to another. When the movement of the proton is accompanied by the opening or closing of a ring it is called ring–chain tautomerism. This type of tautomerism is well observed in carbohydrates, but it also occurs in other molecules such as warfarin. In this work, we present an approach that allows for the generation of all ring–chain tautomers of a given chemical structure. Based on Baldwin’s Rules estimating the likelihood of ring closure reactions to occur, we have defined a set of transform rules covering the majority of ring–chain tautomerism cases. The rules automatically detect substructures in a given compound that can undergo a ring–chain tautomeric transformation. Each transformation is encoded in SMIRKS line notation. All work was implemented in the chemoinformatics toolkit CACTVS. We report on the application of our ring–chain tautomerism rules to a large database of commercially available screening samples in order to identify ring–chain tautomers. PMID:25158156

  15. Neural substrates of similarity and rule-based strategies in judgment

    PubMed Central

    von Helversen, Bettina; Karlsson, Linnea; Rasch, Björn; Rieskamp, Jörg

    2014-01-01

    Making accurate judgments is a core human competence and a prerequisite for success in many areas of life. Plenty of evidence exists that people can employ different judgment strategies to solve identical judgment problems. In categorization, it has been demonstrated that similarity-based and rule-based strategies are associated with activity in different brain regions. Building on this research, the present work tests whether solving two identical judgment problems recruits different neural substrates depending on people's judgment strategies. Combining cognitive modeling of judgment strategies at the behavioral level with functional magnetic resonance imaging (fMRI), we compare brain activity when using two archetypal judgment strategies: a similarity-based exemplar strategy and a rule-based heuristic strategy. Using an exemplar-based strategy should recruit areas involved in long-term memory processes to a larger extent than a heuristic strategy. In contrast, using a heuristic strategy should recruit areas involved in the application of rules to a larger extent than an exemplar-based strategy. Largely consistent with our hypotheses, we found that using an exemplar-based strategy led to relatively higher BOLD activity in the anterior prefrontal and inferior parietal cortex, presumably related to retrieval and selective attention processes. In contrast, using a heuristic strategy led to relatively higher activity in areas in the dorsolateral prefrontal and the temporal-parietal cortex associated with cognitive control and information integration. Thus, even when people solve identical judgment problems, different neural substrates can be recruited depending on the judgment strategy involved. PMID:25360099

  16. A Rational Analysis of Rule-Based Concept Learning

    ERIC Educational Resources Information Center

    Goodman, Noah D.; Tenenbaum, Joshua B.; Feldman, Jacob; Griffiths, Thomas L.

    2008-01-01

    This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space--a concept language of logical rules. This article compares the model predictions to human generalization judgments in several…

  17. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    NASA Astrophysics Data System (ADS)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem which causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm that combines the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. This paper first takes advantage of a Parallel Genetic Algorithm and Simulated Annealing Algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are conducted to show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.

  18. A Rule-Based System Implementing a Method for Translating FOL Formulas into NL Sentences

    NASA Astrophysics Data System (ADS)

    Mpagouli, Aikaterini; Hatzilygeroudis, Ioannis

    In this paper, we mainly present the implementation of a system that translates first-order logic (FOL) formulas into natural language (NL) sentences. The motivation comes from an intelligent tutoring system teaching logic as a knowledge representation language, where it is used as a means for feedback to the students-users. FOL to NL conversion is achieved by using a rule-based approach, where we exploit the pattern matching capabilities of rules. So, the system consists of rule-based modules corresponding to the phases of our translation methodology. Facts are used in a lexicon providing lexical and grammatical information that helps in producing the NL sentences. The whole system is implemented in Jess, a Java-implemented rule-based programming tool. Experimental results confirm the success of our choices.

  19. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.
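
    As a generic illustration of the projection idea only (not the paper's RTE solver), the sketch below builds an orthonormal reduced basis from a handful of synthetic "high-fidelity" snapshots via the SVD and checks that a solution at an unseen direction is well captured by projection onto that basis. In the actual method, the coefficients at unseen ordinates come from a reduced-order model rather than from projecting a known solution; the snapshot function, grid and sizes here are invented.

```python
import numpy as np

# Stand-in for high-fidelity radiative-intensity fields: smooth functions of a
# direction parameter theta, sampled on a 1-D spatial grid (purely synthetic).
grid = np.linspace(0.0, 1.0, 200)
def high_fidelity(theta):
    return np.exp(-grid / (0.2 + theta)) + 0.1 * theta * np.sin(4 * np.pi * grid)

# 1) Snapshots at a few "ordinate directions" give the reduced basis via SVD.
train_thetas = np.linspace(0.1, 0.9, 6)
snapshots = np.column_stack([high_fidelity(t) for t in train_thetas])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]                     # keep the 3 dominant modes

# 2) A solution at a new direction is well captured by projection onto the basis.
test = high_fidelity(0.42)
coeffs = basis.T @ test              # least-squares coefficients (basis is orthonormal)
approx = basis @ coeffs
rel_err = np.linalg.norm(test - approx) / np.linalg.norm(test)
print(f"relative projection error: {rel_err:.2e}")
```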

  1. Efficient Implementations of the Quadrature-Free Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Atkins, Harold L.

    1999-01-01

    The efficiency of the quadrature-free form of the discontinuous Galerkin method in two dimensions, and briefly in three dimensions, is examined. Most of the work for constant-coefficient, linear problems involves the volume and edge integrations, and the transformation of information from the volume to the edges. These operations can be viewed as matrix-vector multiplications. Many of the matrices are sparse as a result of symmetry, and blocking and specialized multiplication routines are used to account for the sparsity. By optimizing these operations, a 35% reduction in total CPU time is achieved. For nonlinear problems, the calculation of the flux becomes dominant because of the cost associated with polynomial products and inversion. This component of the work can be reduced by up to 75% when the products are approximated by truncating terms. Because the cost is high for nonlinear problems on general elements, it is suggested that simplified physics and the most efficient element types be used over most of the domain.
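
    The truncation idea mentioned for the nonlinear flux can be seen in miniature below: multiplying two modal expansions exactly doubles the polynomial degree, so the product is cut back to the original basis size. This is an illustrative sketch in a monomial basis, not the authors' implementation.

```python
# Minimal sketch: truncated polynomial product to limit flux-evaluation cost.
import numpy as np

def truncated_product(a, b):
    """Product of two coefficient vectors (monomial basis assumed),
    truncated to len(a) terms instead of the full len(a) + len(b) - 1."""
    full = np.convolve(a, b)          # exact product coefficients
    return full[:len(a)]              # drop the high-order tail

a = np.array([1.0, 0.5, 0.25])        # u ~ 1 + 0.5 x + 0.25 x^2
b = np.array([2.0, -1.0, 0.1])        # v
print("exact    :", np.convolve(a, b))
print("truncated:", truncated_product(a, b))
```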

  2. A hierarchical fuzzy rule-based approach to aphasia diagnosis.

    PubMed

    Akbarzadeh-T, Mohammad-R; Moshtagh-Khorasani, Majid

    2007-10-01

    Aphasia diagnosis is a particularly challenging medical diagnostic task due to linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, the large number of imprecise measurements, and the natural diversity and subjectivity in test objects as well as in the opinions of experts who diagnose the disease. To efficiently address this diagnostic process, a hierarchical fuzzy rule-based structure is proposed here that considers the effect of different features of aphasia by statistical analysis in its construction. This approach can be efficient for diagnosis of aphasia, and possibly other medical diagnostic applications, due to its fuzzy and hierarchical reasoning construction. Initially, the symptoms of the disease, each of which consists of different features, are analyzed statistically. The measured statistical parameters from the training set are then used to define membership functions and the fuzzy rules. The resulting two-layered fuzzy rule-based system is then compared with a back-propagating feed-forward neural network for diagnosis of four aphasia types: Anomic, Broca, Global and Wernicke. In order to reduce the number of required inputs, the technique is applied and compared on both comprehensive and spontaneous speech tests. Statistical t-test analysis confirms that the proposed approach uses fewer aphasia features while also presenting a significant improvement in terms of accuracy.
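
    The core mechanic, turning per-syndrome feature statistics into membership functions and combining them with a fuzzy AND, can be sketched as follows; the feature names, statistics, and rules are invented for illustration and are not the paper's values.

```python
# Minimal sketch: Gaussian membership functions from (mean, std) plus min-style fuzzy rules.
import math

def gauss_mf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2)

# Assumed training statistics: (mean, std) of two features per syndrome.
STATS = {
    "Broca":    {"fluency": (2.0, 1.0), "comprehension": (7.0, 1.5)},
    "Wernicke": {"fluency": (8.0, 1.5), "comprehension": (2.5, 1.0)},
}

def diagnose(fluency, comprehension):
    scores = {}
    for syndrome, st in STATS.items():
        # Rule: IF fluency is typical-for-syndrome AND comprehension is
        # typical-for-syndrome THEN syndrome (min acts as fuzzy AND).
        scores[syndrome] = min(
            gauss_mf(fluency, *st["fluency"]),
            gauss_mf(comprehension, *st["comprehension"]),
        )
    return max(scores, key=scores.get), scores

print(diagnose(fluency=2.5, comprehension=6.0))
```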

  3. Rule-based reasoning is fast and belief-based reasoning can be slow: Challenging current explanations of belief-bias and base-rate neglect.

    PubMed

    Newman, Ian R; Gibb, Maia; Thompson, Valerie A

    2017-07-01

    It is commonly assumed that belief-based reasoning is fast and automatic, whereas rule-based reasoning is slower and more effortful. Dual-Process theories of reasoning rely on this speed-asymmetry explanation to account for a number of reasoning phenomena, such as base-rate neglect and belief-bias. The goal of the current study was to test this hypothesis about the relative speed of belief-based and rule-based processes. Participants solved base-rate problems (Experiment 1) and conditional inferences (Experiment 2) under a challenging deadline; they then gave a second response in free time. We found that fast responses were informed by rules of probability and logical validity, and that slow responses incorporated belief-based information. Implications for Dual-Process theories and future research options for dissociating Type I and Type II processes are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Architecture For The Optimization Of A Machining Process In Real Time Through Rule-Based Expert System

    NASA Astrophysics Data System (ADS)

    Serrano, Rafael; González, Luis Carlos; Martín, Francisco Jesús

    2009-11-01

    Under the project SENSOR-IA, which received financial funding from the Order of Incentives to the Regional Technology Centers of the Council of Innovation, Science and Enterprise of Andalusia, an architecture for the optimization of a machining process in real time through a rule-based expert system has been developed. The architecture consists of a sensor data acquisition and processing engine (SATD) and a rule-based expert system (SE) which communicates with the SATD. The SE has been designed as an inference engine with an algorithm for effective action, using a modus ponens rule model of goal-oriented rules. The pilot test demonstrated that it is possible to govern the machining process in real time based on rules contained in an SE. The tests have been done with approximated rules. Future work includes an exhaustive collection of data with different tool materials and geometries in a database to extract more precise rules.
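
    A forward-chaining modus ponens loop of the kind such an inference engine relies on can be sketched in a few lines; the sensor facts, rule conditions, and actions below are illustrative assumptions, not the SENSOR-IA rule base.

```python
# Minimal sketch: forward chaining by modus ponens over IF/THEN rules.
RULES = [
    ({"vibration_high", "temperature_high"}, "reduce_feed_rate"),
    ({"tool_wear_high"}, "replace_tool"),
    ({"reduce_feed_rate"}, "notify_operator"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing rules until a fixed point
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # modus ponens: premises hold, assert conclusion
                changed = True
    return facts

print(infer({"vibration_high", "temperature_high"}))
```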

  5. Differential quadrature method of nonlinear bending of functionally graded beam

    NASA Astrophysics Data System (ADS)

    Gangnian, Xu; Liansheng, Ma; Wang, Youzhi; Quan, Yuan; Weijie, You

    2018-02-01

    Using the third-order shear deformation beam theory (TBT), the nonlinear bending of functionally graded (FG) beams composed of various amounts of ceramic and metal is analyzed using the differential quadrature method (DQM). The material properties of the beam are assumed to vary according to a power law through the thickness. First, according to the principle of stationary potential energy, the governing partial differential equations of the FG beams subjected to a distributed lateral force are derived. To obtain numerical results for the nonlinear bending, the non-dimensional boundary conditions and governing equations are discretized by applying the DQM. To verify the present solution, several examples of nonlinear bending of homogeneous beams with various edge conditions are analyzed. A detailed parametric study is then carried out on the effects of the power law index, transverse shear deformation, distributed lateral force and boundary conditions.
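
    The DQM replaces derivatives at grid points with weighted sums of nodal values; the sketch below builds the standard first-derivative weighting matrix on Chebyshev-Gauss-Lobatto points and checks it on a smooth function. It illustrates only the discretization step, not the paper's nonlinear FG-beam solver, and the grid size is an arbitrary choice.

```python
# Minimal sketch: differential quadrature first-derivative weighting coefficients.
import numpy as np

def dq_first_derivative_matrix(x):
    n = len(x)
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i]) for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        A[i, i] = -A[i].sum()            # rows of a derivative matrix sum to zero
    return A

n = 15
x = 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))   # CGL points mapped to [0, 1]
A = dq_first_derivative_matrix(x)
u = np.sin(np.pi * x)
print("max derivative error:", np.abs(A @ u - np.pi * np.cos(np.pi * x)).max())
```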

  6. Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles.

    PubMed

    Pasquier, M; Quek, C; Toh, M

    2001-10-01

    This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool for developing rule-based control systems when an exact working model is not available, as is the case with any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets, and to determine a suitable rule base. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rule base from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D-simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record, from a human pilot, driving data consisting of steering and speed control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rule base modelling the human driving behaviour. Finally, the effectiveness of the generated rule base has been validated using the simulator in autopilot mode.

  7. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
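
    As a numerical counterpart to such an analysis, the average SER can be estimated by Monte Carlo simulation; the sketch below does this for 16-QAM over a simple multiplicative fading channel. The lognormal fading stand-in is an assumption made for brevity, whereas the paper models Malaga-distributed turbulence with pointing errors.

```python
# Minimal sketch: Monte Carlo average SER for 16-QAM over a fading channel.
import numpy as np

rng = np.random.default_rng(1)
m = 4                                                   # sqrt(M) for 16-QAM
levels = 2 * np.arange(m) - (m - 1)                     # [-3, -1, 1, 3]
const = (levels[:, None] + 1j * levels[None, :]).ravel()
const = const / np.sqrt(np.mean(np.abs(const) ** 2))    # unit average symbol energy

def avg_ser(snr_db, n_sym=100_000):
    snr = 10.0 ** (snr_db / 10.0)
    tx = rng.choice(const, size=n_sym)
    h = rng.lognormal(mean=-0.125, sigma=0.5, size=n_sym)      # fading stand-in (mean 1)
    noise = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) * np.sqrt(0.5 / snr)
    rx = h * tx + noise
    eq = rx / h                                          # perfect channel knowledge assumed
    detected = const[np.argmin(np.abs(eq[:, None] - const[None, :]), axis=1)]
    return np.mean(detected != tx)

for snr_db in (10, 15, 20, 25):
    print(f"{snr_db} dB: SER ~ {avg_ser(snr_db):.4f}")
```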

  8. Research on key technology of the verification system of steel rule based on vision measurement

    NASA Astrophysics Data System (ADS)

    Jia, Siyuan; Wang, Zhong; Liu, Changjie; Fu, Luhua; Li, Yiming; Lu, Ruijun

    2018-01-01

    The steel rule plays an important role in quantity transmission. However, the traditional verification method for steel rules, based on manual operation and reading, brings about low precision and low efficiency. A machine-vision-based verification system for steel rules is designed with reference to JJG1-1999, Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method for the pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. Measuring results strongly prove that these methods not only meet the precision of the verification regulation, but also improve the reliability and efficiency of the verification system.

  9. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  10. ConGEMs: Condensed Gene Co-Expression Module Discovery Through Rule-Based Clustering and Its Application to Carcinogenesis.

    PubMed

    Mallik, Saurav; Zhao, Zhongming

    2017-12-28

    For transcriptomic analysis, there are numerous microarray-based genomic data, especially those generated for cancer research. The typical analysis measures the difference between a cancer sample group and a matched control group for each transcript or gene. Association rule mining is used to discover interesting item sets through a rule-based methodology. Thus, it has advantages in finding causal effect relationships between the transcripts. In this work, we introduce two new rule-based similarity measures, the weighted rank-based Jaccard and cosine measures, and then propose a novel computational framework to detect condensed gene co-expression modules (ConGEMs) through the association rule-based learning system and the weighted similarity scores. In practice, the list of evolved condensed markers, which consists of both singular and complex markers in nature, depends on the corresponding condensed gene sets in either the antecedent or the consequent of the rules of the resultant modules. In our evaluation, these markers could be supported by literature evidence, KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway and Gene Ontology annotations. Specifically, we preliminarily identified differentially expressed genes using an empirical Bayes test. A recently developed algorithm, RANWAR, was then utilized to determine the association rules from these genes. Based on that, we computed the integrated similarity scores of these rule-based similarity measures between each rule pair, and the resultant scores were used for clustering to identify the co-expressed rule modules. We applied our method to a gene expression dataset for lung squamous cell carcinoma and a genome methylation dataset for uterine cervical carcinogenesis. Our proposed module discovery method produced better results than the traditional gene-module discovery measures. In summary, our proposed rule-based method is useful for exploring biomarker modules from transcriptomic data.

  11. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral database. After analyzing the course of cadastral change, especially the parcel change with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationship corresponding to the cadastral change is put forward and a coding format composed of street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rule has been applied to the development of an urban cadastral information system called "ReGIS", which is not only able to figure out the cadastral code automatically according to both the type of parcel change and the coding rules, but also capable of checking out whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and got a favorable response. This verifies the feasibility and effectiveness of the coding rules to some extent.

  12. Quadrature transmit coil for breast imaging at 7 tesla using forced current excitation for improved homogeneity.

    PubMed

    McDougall, Mary Preston; Cheshkov, Sergey; Rispoli, Joseph; Malloy, Craig; Dimitrov, Ivan; Wright, Steven M

    2014-11-01

    To demonstrate the use of forced current excitation (FCE) to create homogeneous excitation of the breast at 7 tesla, insensitive to the effects of asymmetries in the electrical environment. FCE was implemented on two breast coils: one for quadrature ¹H imaging and one for proton-decoupled ¹³C spectroscopy. Both were a Helmholtz-saddle combination, with the saddle tuned to 298 MHz for imaging and 75 MHz for spectroscopy. Bench measurements were acquired to demonstrate the ability to force equal currents on elements in the presence of asymmetric loading to improve homogeneity. Modeling and temperature measurements were conducted per safety protocol. B1 mapping, imaging, and proton-decoupled ¹³C spectroscopy were demonstrated in vivo. Using FCE to ensure balanced currents on elements enabled straightforward tuning and maintenance of isolation between quadrature elements of the coil. Modeling and bench measurements confirmed homogeneity of the field, which resulted in images with excellent fat suppression and in broadband proton-decoupled carbon-13 spectra. FCE is a straightforward approach to ensure equal currents on multiple coil elements and a homogeneous excitation field, insensitive to the effects of asymmetries in the electrical environment. This enabled effective breast imaging and proton-decoupled carbon-13 spectroscopy at 7T. © 2014 Wiley Periodicals, Inc.

  13. Integrating the ECG power-line interference removal methods with rule-based system.

    PubMed

    Kumaravel, N; Senthil, A; Sridhar, K S; Nithiyanandam, N

    1995-01-01

    The power-line frequency interference in electrocardiographic signals is eliminated to enhance the signal characteristics for diagnosis. The power-line frequency normally varies by ±1.5 Hz from its standard value of 50 Hz. In the present work, the performances of the linear FIR filter, the wave digital filter (WDF) and the adaptive filter are studied for power-line frequency variations from 48.5 to 51.5 Hz in steps of 0.5 Hz. The advantage of the LMS adaptive filter over other fixed-frequency filters in removing power-line interference, even when the interference frequency varies by ±1.5 Hz from its nominal value of 50 Hz, is clearly demonstrated. A novel method of integrating a rule-based system approach with the linear FIR filter, and also with the wave digital filter, is proposed. The performances of the rule-based FIR filter and the rule-based wave digital filter are compared with the LMS adaptive filter.
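
    A minimal sketch of the LMS adaptive cancellation idea follows: a fixed 50 Hz sine/cosine reference pair is adapted so that it tracks the interference even when the mains frequency drifts within the notch bandwidth. The signal parameters, step size, and drift below are illustrative assumptions.

```python
# Minimal sketch: LMS adaptive cancellation of drifting power-line interference.
import numpy as np

fs = 500.0                                   # sampling rate (Hz)
t = np.arange(int(fs * 4)) / fs              # 4 s record
ecg = 0.8 * np.sin(2 * np.pi * 1.2 * t)      # crude ECG stand-in
mains = 0.5 * np.sin(2 * np.pi * 50.5 * t + 0.3)   # interference, drifted off 50 Hz
x = ecg + mains

ref = np.column_stack([np.sin(2 * np.pi * 50.0 * t),
                       np.cos(2 * np.pi * 50.0 * t)])  # fixed 50 Hz reference pair
w = np.zeros(2)
mu = 0.05                                    # LMS step size (assumed)
cleaned = np.empty_like(x)
for n in range(len(x)):
    y = ref[n] @ w                           # current interference estimate
    e = x[n] - y                             # error = cleaned ECG sample
    w += 2 * mu * e * ref[n]                 # LMS weight update
    cleaned[n] = e

print("interference power before:", round(float(np.var(x - ecg)), 4))
print("interference power after :", round(float(np.var(cleaned - ecg)), 4))
```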

  14. The effect of multiple primary rules on population-based cancer survival

    PubMed Central

    Weir, Hannah K.; Johnson, Christopher J.; Thompson, Trevor D.

    2015-01-01

    Purpose: Different rules for registering multiple primary (MP) cancers are used by cancer registries throughout the world, making international data comparisons difficult. This study evaluates the effect of Surveillance, Epidemiology, and End Results (SEER) and International Association of Cancer Registries (IACR) MP rules on population-based cancer survival estimates. Methods: Data from five US states and six metropolitan area cancer registries participating in the SEER Program were used to estimate age-standardized relative survival (RS%) for first cancers-only and all first cancers matching the selection criteria according to SEER and IACR MP rules for all cancer sites combined and for the top 25 cancer site groups among men and women. Results: During 1995–2008, the percentage of MP cancers (all sites, both sexes) increased 25.4% by using SEER rules (from 14.6 to 18.4%) and 20.1% by using IACR rules (from 13.2 to 15.8%). More MP cancers were registered among females than among males, and SEER rules registered more MP cancers than IACR rules (15.8 vs. 14.4% among males; 17.2 vs. 14.5% among females). The top 3 cancer sites with the largest differences were melanoma (5.8%), urinary bladder (3.5%), and kidney and renal pelvis (2.9%) among males, and breast (5.9%), melanoma (3.9%), and urinary bladder (3.4%) among females. Five-year survival estimates (all sites combined) restricted to first primary cancers-only were higher than estimates by using first site-specific primaries (SEER or IACR rules), and for 11 of 21 sites among males and 11 of 23 sites among females. SEER estimates are comparable to IACR estimates for all site-specific cancers and marginally higher for all sites combined among females (RS 62.28 vs. 61.96%). Conclusion: Survival after diagnosis has improved for many leading cancers. However, cancer patients remain at risk of subsequent cancers. Survival estimates based on first cancers-only exclude a large and increasing number of MP

  15. Noise tolerance in optical waveguide circuits for recognition of optical 16 quadrature amplitude modulation codes

    NASA Astrophysics Data System (ADS)

    Inoshita, Kensuke; Hama, Yoshimitsu; Kishikawa, Hiroki; Goto, Nobuo

    2016-12-01

    In photonic label routers, various optical signal processing functions are required; these include optical label extraction, recognition of the label, optical switching and buffering controlled by signals based on the label information and network routing tables, and label rewriting. Among these functions, we focus on photonic label recognition. We have proposed two kinds of optical waveguide circuits to recognize 16 quadrature amplitude modulation codes, i.e., recognition from the minimum output port and from the maximum output port. The recognition function was theoretically analyzed and numerically simulated by the finite-difference beam-propagation method. We discuss noise tolerance in the circuit and show numerically simulated results to evaluate bit-error-rate (BER) characteristics against optical signal-to-noise ratio (OSNR). The OSNR required to obtain a BER less than 1.0×10⁻³ for the symbol rate of 2.5 GBaud was 14.5 and 27.0 dB for recognition from the minimum and maximum output, respectively.

  16. SOHO-Ulysses Coordinated Studies During the Two Extended Quadratures and the Radial Alignment of 2007-2008

    NASA Technical Reports Server (NTRS)

    Suess, S. T.; Poletto, G.

    2007-01-01

    During quadrature, plasma seen on the limb of the Sun, along the radial direction to Ulysses, by SOHO or STEREO can be sampled in situ as it later passes Ulysses. A figure shows a coronagraph image, the radial direction towards Ulysses at 58 deg S, and the SOHO/UVCS slit positions during one set of observations. A CME subsequently occurred and passed Ulysses (at 3/4 AU) 15 days later.

  17. Do Americans Have a Preference for Rule-Based Classification?

    ERIC Educational Resources Information Center

    Murphy, Gregory L.; Bosch, David A.; Kim, ShinWoo

    2017-01-01

    Six experiments investigated variables predicted to influence subjects' tendency to classify items by a single property ("rule-based" responding) instead of overall similarity, following the paradigm of Norenzayan et al. (2002, "Cognitive Science"), who found that European Americans tended to give more "logical"…

  18. Comparison of conventional rule based flow control with control processes based on fuzzy logic in a combined sewer system.

    PubMed

    Klepiszewski, K; Schmitt, T G

    2002-01-01

    While conventional rule-based, real-time flow control of sewer systems is in common use, control systems based on fuzzy logic have been used only rarely, albeit successfully. The intention of this study is to compare a conventional rule-based control of a combined sewer system with a fuzzy logic control by using hydrodynamic simulation. The objective of both control strategies is to reduce the combined sewer overflow volume by optimizing the utilized storage capacities of four combined sewer overflow tanks. The control systems adjust the outflow of the four combined sewer overflow tanks depending on the water levels inside the structures. Both systems use an identical rule base. The developed control systems are tested and optimized for a single storm event which produces heterogeneous hydraulic load conditions and local discharges. Finally, the efficiencies of the two different control systems are compared for two more storm events. The results indicate that the conventional rule-based control and the fuzzy control reach the objective of the control strategy equally well. In spite of the higher expense of designing the fuzzy control system, its use provides no advantages in this case.

  19. A forecast-based STDP rule suitable for neuromorphic implementation.

    PubMed

    Davies, S; Galluppi, F; Rast, A D; Furber, S B

    2012-08-01

    Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP) which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike) relates the membrane potential with the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements here presented are significant for real-time applications such as the ones for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. On-chip results show that
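
    For reference, the baseline pair-based STDP update that the TTS variant modifies can be sketched as below; the amplitudes and time constants are illustrative assumptions, and the TTS rule would trigger the LTP term from a membrane-potential-based forecast of the post-synaptic spike time rather than from the spike itself.

```python
# Minimal sketch: pair-based STDP weight update (LTP / LTD exponential windows).
import math

A_PLUS, A_MINUS = 0.01, 0.012     # LTP / LTD amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt >= 0:                   # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)   # post before pre -> depression

for dt in (-40, -10, 0, 10, 40):
    print(f"dt = {dt:>4} ms  ->  dw = {stdp_dw(0, dt):+.5f}")
```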

  20. Pillars of judgment: how memory abilities affect performance in rule-based and exemplar-based judgments.

    PubMed

    Hoffmann, Janina A; von Helversen, Bettina; Rieskamp, Jörg

    2014-12-01

    Making accurate judgments is an essential skill in everyday life. Although how different memory abilities relate to categorization and judgment processes has been hotly debated, the question is far from resolved. We contribute to the solution by investigating how individual differences in memory abilities affect judgment performance in 2 tasks that induced rule-based or exemplar-based judgment strategies. In a study with 279 participants, we investigated how working memory and episodic memory affect judgment accuracy and strategy use. As predicted, participants switched strategies between tasks. Furthermore, structural equation modeling showed that the ability to solve rule-based tasks was predicted by working memory, whereas episodic memory predicted judgment accuracy in the exemplar-based task. Last, the probability of choosing an exemplar-based strategy was related to better episodic memory, but strategy selection was unrelated to working memory capacity. In sum, our results suggest that different memory abilities are essential for successfully adopting different judgment strategies. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  1. Hierarchical graphs for better annotations of rule-based models of biochemical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Bin; Hlavacek, William

    2009-01-01

    In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.

  2. Rule groupings in expert systems using nearest neighbour decision rules, and convex hulls

    NASA Technical Reports Server (NTRS)

    Anastasiadis, Stergios

    1991-01-01

    Expert system shells are lacking in many areas of software engineering. Large rule-based systems are not semantically comprehensible, difficult to debug, and impossible to modify or validate. Partitioning a set of rules found in CLIPS (C Language Integrated Production System) into groups of rules which reflect the underlying semantic subdomains of the problem will adequately address the concerns stated above. Techniques are introduced to structure a CLIPS rule base into groups of rules that inherently share common semantic information. The concepts involved are imported from the fields of A.I., pattern recognition, and statistical inference. The techniques focus on the areas of feature selection, classification, and a criterion of how 'good' the classification technique is, based on Bayesian decision theory. A variety of distance metrics are discussed for measuring the 'closeness' of CLIPS rules, and various nearest neighbour classification algorithms are described based on the above metrics.

  3. Two-wavelength quadrature multipoint detection of partial discharge in power transformers using fiber Fabry-Perot acoustic sensors

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Han, Ming; Wang, Anbo

    2012-06-01

    A reliable and low-cost two-wavelength quadrature interrogation method has been developed to demodulate optical signals from diaphragm-based Fabry-Perot interferometric fiber optic sensors for multipoint partial discharge detection in power transformers. Commercially available fused-silica parts (a wafer, a fiber ferrule, and a mating sleeve) and a cleaved single-mode optical fiber were bonded together to form an extrinsic Fabry-Perot acoustic sensor. Two lasers with center wavelengths separated by a quarter of the period of the sensor interference fringes were used to probe the acoustic-wave-induced diaphragm vibration. A coarse wavelength-division multiplexing (CWDM) add/drop multiplexer was used to separate the two reflected wavelengths before two photodetectors. Optical couplers were used to distribute the mixed laser light to each sensor-detector module for multiplexing purposes. The sensor structure, detection system design and experimental results are presented.
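
    The quadrature relationship itself is easy to demonstrate: with the two wavelengths a quarter of a fringe apart, the two detector outputs ideally follow the cosine and sine of the same cavity phase, so an arctangent recovers the diaphragm motion. In the sketch below, the fringe offset, amplitude, and acoustic phase excursion are assumptions, and the offsets are treated as known (calibrated).

```python
# Minimal sketch: two-wavelength quadrature demodulation of a cavity phase.
import numpy as np

t = np.linspace(0.0, 1e-3, 2000)                     # 1 ms record
phase = 0.6 * np.sin(2 * np.pi * 20e3 * t)           # acoustic-driven cavity phase (rad)

offset, amp = 1.0, 0.5                               # assumed (calibrated) fringe offset/amplitude
i1 = offset + amp * np.cos(phase)                    # detector output at wavelength 1
i2 = offset + amp * np.sin(phase)                    # wavelength 2, a quarter fringe away

recovered = np.arctan2(i2 - offset, i1 - offset)     # quadrature demodulation
print("max phase error (rad):", np.abs(recovered - phase).max())
```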

  4. A rule-based expert system for chemical prioritization using effects-based chemical categories

    EPA Science Inventory

    A rule-based expert system (ES) was developed to predict chemical binding to the estrogen receptor (ER) patterned on the research approaches championed by Gilman Veith to whom this article and journal issue are dedicated. The ERES was built to be mechanistically-transparent and m...

  5. Common-Sense Rule Inference

    NASA Astrophysics Data System (ADS)

    Lombardi, Ilaria; Console, Luca

    In the paper we show how rule-based inference can be made more flexible by exploiting semantic information associated with the concepts involved in the rules. We introduce flexible forms of common sense reasoning in which whenever no rule applies to a given situation, the inference engine can fire rules that apply to more general or to similar situations. This can be obtained by defining new forms of match between rules and the facts in the working memory and new forms of conflict resolution. We claim that in this way we can overcome some of the brittleness problems that are common in rule-based systems.

  6. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    PubMed Central

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887

  7. A Note on Multigrid Theory for Non-nested Grids and/or Quadrature

    NASA Technical Reports Server (NTRS)

    Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.

    1996-01-01

    We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.

  8. Assessment of Matrix Multiplication Learning with a Rule-Based Analytical Model--"A Bayesian Network Representation"

    ERIC Educational Resources Information Center

    Zhang, Zhidong

    2016-01-01

    This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…

  9. A high-level language for rule-based modelling.

    PubMed

    Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D

    2015-01-01

    Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages.

  10. A High-Level Language for Rule-Based Modelling

    PubMed Central

    Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D.

    2015-01-01

    Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages. PMID:26043208

  11. Cell Phones: Rule-Setting, Rule-Breaking, and Relationships in Classrooms

    ERIC Educational Resources Information Center

    Charles, Anita S.

    2012-01-01

    Based on a small qualitative study, this article focuses on understanding the rules for cell phones and other social networking media in schools, an aspect of broader research that led to important understandings of teacher-student negotiations. It considers the rules that schools and teachers make, the rampant breaking of these rules, the…

  12. On the effects of adaptive reservoir operating rules in hydrological physically-based models

    NASA Astrophysics Data System (ADS)

    Giudici, Federico; Anghileri, Daniela; Castelletti, Andrea; Burlando, Paolo

    2017-04-01

    Recent years have seen a significant increase of the human influence on the natural systems both at the global and local scale. Accurately modeling the human component and its interaction with the natural environment is key to characterize the real system dynamics and anticipate future potential changes to the hydrological regimes. Modern distributed, physically-based hydrological models are able to describe hydrological processes with high level of detail and high spatiotemporal resolution. Yet, they lack in sophistication for the behavior component and human decisions are usually described by very simplistic rules, which might underperform in reproducing the catchment dynamics. In the case of water reservoir operators, these simplistic rules usually consist of target-level rule curves, which represent the average historical level trajectory. Whilst these rules can reasonably reproduce the average seasonal water volume shifts due to the reservoirs' operation, they cannot properly represent peculiar conditions, which influence the actual reservoirs' operation, e.g., variations in energy price or water demand, dry or wet meteorological conditions. Moreover, target-level rule curves are not suitable to explore the water system response to climate and socio economic changing contexts, because they assume a business-as-usual operation. In this work, we quantitatively assess how the inclusion of adaptive reservoirs' operating rules into physically-based hydrological models contribute to the proper representation of the hydrological regime at the catchment scale. In particular, we contrast target-level rule curves and detailed optimization-based behavioral models. We, first, perform the comparison on past observational records, showing that target-level rule curves underperform in representing the hydrological regime over multiple time scales (e.g., weekly, seasonal, inter-annual). Then, we compare how future hydrological changes are affected by the two modeling

  13. Rule acquisition in formal decision contexts based on formal, object-oriented and property-oriented concept lattices.

    PubMed

    Ren, Yue; Li, Jinhai; Aswani Kumar, Cherukuri; Liu, Wenqi

    2014-01-01

    Rule acquisition is one of the main purposes in the analysis of formal decision contexts. Up to now, there have been several types of rules in formal decision contexts such as decision rules, decision implications, and granular rules, which can be viewed as ∧-rules since all of them have the following form: "if conditions 1,2,…, and m hold, then decisions hold." In order to enrich the existing rule acquisition theory in formal decision contexts, this study puts forward two new types of rules which are called ∨-rules and ∨-∧ mixed rules based on formal, object-oriented, and property-oriented concept lattices. Moreover, a comparison of ∨-rules, ∨-∧ mixed rules, and ∧-rules is made from the perspectives of inclusion and inference relationships. Finally, some real examples and numerical experiments are conducted to compare the proposed rule acquisition algorithms with the existing one in terms of the running efficiency.

  14. Rule Acquisition in Formal Decision Contexts Based on Formal, Object-Oriented and Property-Oriented Concept Lattices

    PubMed Central

    Ren, Yue; Aswani Kumar, Cherukuri; Liu, Wenqi

    2014-01-01

    Rule acquisition is one of the main purposes in the analysis of formal decision contexts. Up to now, there have been several types of rules in formal decision contexts such as decision rules, decision implications, and granular rules, which can be viewed as ∧-rules since all of them have the following form: “if conditions 1,2,…, and m hold, then decisions hold.” In order to enrich the existing rule acquisition theory in formal decision contexts, this study puts forward two new types of rules which are called ∨-rules and ∨-∧ mixed rules based on formal, object-oriented, and property-oriented concept lattices. Moreover, a comparison of ∨-rules, ∨-∧ mixed rules, and ∧-rules is made from the perspectives of inclusion and inference relationships. Finally, some real examples and numerical experiments are conducted to compare the proposed rule acquisition algorithms with the existing one in terms of the running efficiency. PMID:25165744

  15. Discontinuous categories affect information-integration but not rule-based category learning.

    PubMed

    Maddox, W Todd; Filoteo, J Vincent; Lauritzen, J Scott; Connally, Emily; Hejl, Kelli D

    2005-07-01

    Three experiments were conducted that provide a direct examination of within-category discontinuity manipulations on the implicit, procedural-based learning and the explicit, hypothesis-testing systems proposed in F. G. Ashby, L. A. Alfonso-Reese, A. U. Turken, and E. M. Waldron's (1998) competition between verbal and implicit systems model. Discontinuous categories adversely affected information-integration but not rule-based category learning. Increasing the magnitude of the discontinuity did not lead to a significant decline in performance. The distance to the bound provides a reasonable description of the generalization profile associated with the hypothesis-testing system, whereas the distance to the bound plus the distance to the trained response region provides a reasonable description of the generalization profile associated with the procedural-based learning system. These results suggest that within-category discontinuity differentially impacts information-integration but not rule-based category learning and provides information regarding the detailed processing characteristics of each category learning system. ((c) 2005 APA, all rights reserved).

  16. High-resolution, anthropomorphic, computational breast phantom: fusion of rule-based structures with patient-based anatomy

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Gong, Xiaolin; Graff, Christian G.; Santana, Maira; Sturgeon, Gregory M.; Sauer, Thomas J.; Zeng, Rongping; Glick, Stephen J.; Lo, Joseph Y.

    2017-03-01

    While patient-based breast phantoms are realistic, they are limited by low resolution due to the image acquisition and segmentation process. The purpose of this study is to restore the high-frequency components of patient-based phantoms by adding power law noise (PLN) and breast structures generated from mathematical models. First, 3D radially symmetric PLN with β=3 was added at the boundary between adipose and glandular tissue to connect broken tissue and create a high-frequency contour of the glandular tissue. Next, selected high-frequency features from the FDA rule-based computational phantom (Cooper's ligaments, ductal network, and blood vessels) were fused into the phantom. The effects of the enhancement in this study were demonstrated with 2D mammography projections and digital breast tomosynthesis (DBT) reconstruction volumes. The addition of PLN and rule-based models leads to a continuous decrease in β. The new β is 2.76, which is similar to what is typically found for reconstructed DBT volumes. The new combined breast phantoms retain the realism from segmentation and gain higher resolution after restoration.
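
    Power-law noise of this kind is commonly synthesized by shaping white noise in the Fourier domain; the sketch below does so in 2-D for brevity (the phantom work above is 3-D), with the field size and normalization as assumptions.

```python
# Minimal sketch: radially symmetric power-law noise with spectral exponent beta.
import numpy as np

def power_law_noise(shape, beta=3.0, seed=0):
    """2-D noise field whose power spectrum falls off as 1/f**beta."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    radial = np.sqrt(fx ** 2 + fy ** 2)
    radial[0, 0] = np.inf                         # kill the DC term
    amplitude = radial ** (-beta / 2.0)           # power ~ amplitude**2 ~ 1/f**beta
    phase = np.exp(2j * np.pi * rng.random(shape))
    field = np.fft.ifft2(amplitude * phase).real
    return (field - field.mean()) / field.std()   # zero mean, unit variance

pln = power_law_noise((256, 256), beta=3.0)
print(pln.shape, round(float(pln.std()), 3))
```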

  17. Proving Properties of Rule-Based Systems

    DTIC Science & Technology

    1990-12-01

    in these systems and enable us to use them with more confidence. Each system of rules is encoded as a set of axioms that define the system theory. The ... operation of the rule language and information about the subject domain are also described in the system theory. Validation tasks, such as ... the validity of the conjecture in the system theory, we have carried out the corresponding validation task. If the proof is restricted to be

  18. Integration of object-oriented knowledge representation with the CLIPS rule based system

    NASA Technical Reports Server (NTRS)

    Logie, David S.; Kamil, Hasan

    1990-01-01

    The paper describes a portion of the work aimed at developing an integrated, knowledge-based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ and is used to build and modify an object-oriented knowledge base. The ORL was designed in such a way as to be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule-based system CLIPS (C Language Integrated Production System), developed at the NASA Johnson Space Center, will be discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule-based knowledge is used to simulate the reasoning process. Alone, the object-based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.

  19. A rule-based, dose-finding design for use in stroke rehabilitation research: methodological development.

    PubMed

    Colucci, E; Clark, A; Lang, C E; Pomeroy, V M

    2017-12-01

    Dose-optimisation studies as precursors to clinical trials are rare in stroke rehabilitation. To develop a rule-based, dose-finding design for stroke rehabilitation research. 3+3 rule-based, dose-finding study. Dose escalation/de-escalation was undertaken according to preset rules and a mathematical sequence (modified Fibonacci sequence). The target starting daily dose was 50 repetitions for the first cohort. Adherence was recorded by an electronic counter. At the end of the 2-week training period, the adherence record indicated dose tolerability (adherence to target dose) and the outcome measure indicated dose benefit (10% increase in motor function). The preset increment/decrease and checking rules were then applied to set the dose for the subsequent cohort. The process was repeated until preset stopping rules were met. Participants had a mean age of 68 (range 48 to 81) years, and were a mean of 70 (range 9 to 289) months post stroke with moderate upper limb paresis. A custom-built model of exercise-based training to enhance ability to open the paretic hand. Repetitions per minute of extension/flexion of paretic digits against resistance. Usability of the preset rules and whether the maximally tolerated dose was identifiable. Five cohorts of three participants were involved. Discernibly different doses were set for each subsequent cohort (i.e. 50, 100, 167, 251 and 209 repetitions/day). The maximally tolerated dose for the model training task was 209 repetitions/day. This dose-finding design is a feasible method for use in stroke rehabilitation research. Copyright © 2017 Chartered Society of Physiotherapy. All rights reserved.
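
    The escalation logic itself is simple enough to sketch: tolerability and benefit at each cohort decide whether the dose is increased by a shrinking modified-Fibonacci factor, held, or reduced. The specific increments, de-escalation factor, and cohort outcomes below are assumptions for illustration, not the published protocol.

```python
# Minimal sketch: rule-based dose escalation with modified-Fibonacci increments.
FIBONACCI_FACTORS = [2.0, 1.67, 1.5, 1.33]          # classic modified-Fibonacci steps (assumed)

def next_dose(current_dose, cohort_index, tolerated, improved):
    if not tolerated:
        return round(current_dose / 1.2)            # de-escalate on poor adherence (assumed rule)
    if not improved:
        return current_dose                         # hold: tolerated but no benefit
    step = FIBONACCI_FACTORS[min(cohort_index, len(FIBONACCI_FACTORS) - 1)]
    return round(current_dose * step)               # escalate by a decreasing increment

dose, history = 50, []                              # target starting dose: 50 repetitions/day
outcomes = [(True, True), (True, True), (True, True), (False, True)]   # assumed cohort results
for cohort, (tolerated, improved) in enumerate(outcomes):
    history.append(dose)
    dose = next_dose(dose, cohort, tolerated, improved)
history.append(dose)
print(history)                                      # doses assigned to successive cohorts
```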

  20. New QCD sum rules based on canonical commutation relations

    NASA Astrophysics Data System (ADS)

    Hayata, Tomoya

    2012-04-01

    A new derivation of QCD sum rules from canonical commutators is developed. It is a simple and straightforward generalization of the Thomas-Reiche-Kuhn sum rule on the basis of the Kugo-Ojima operator formalism of a non-abelian gauge theory and a suitable subtraction of UV divergences. By applying the method to the vector and axial-vector currents in QCD, the exact Weinberg sum rules are examined. Vector current sum rules and new fractional power sum rules are also discussed.

  1. A multilayer perceptron solution to the match phase problem in rule-based artificial intelligence systems

    NASA Technical Reports Server (NTRS)

    Sartori, Michael A.; Passino, Kevin M.; Antsaklis, Panos J.

    1992-01-01

    In rule-based AI planning, expert, and learning systems, it is often the case that the left-hand-sides of the rules must be repeatedly compared to the contents of some 'working memory'. The traditional approach to solve such a 'match phase problem' for production systems is to use the Rete Match Algorithm. Here, a new technique using a multilayer perceptron, a particular artificial neural network model, is presented to solve the match phase problem for rule-based AI systems. A syntax for premise formulas (i.e., the left-hand-sides of the rules) is defined, and working memory is specified. From this, it is shown how to construct a multilayer perceptron that finds all of the rules which can be executed for the current situation in working memory. The complexity of the constructed multilayer perceptron is derived in terms of the maximum number of nodes and the required number of layers. A method for reducing the number of layers to at most three is also presented.

  2. Efficiency of reactant site sampling in network-free simulation of rule-based models for biochemical systems

    PubMed Central

    Yang, Jin; Hlavacek, William S.

    2011-01-01

    Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie’s method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e., long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e., time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such
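
    The contrast between the two sampling styles can be shown on a toy rule in which only ligands with a free site may bind: the rejection variant draws events from an overestimated rate and discards those that hit an already-bound ligand (null events), while the rejection-free variant recomputes the exact rate at each step. Rate constants and counts are assumptions; this is not the RuleMonkey or NFsim code.

```python
# Minimal sketch: rejection vs. rejection-free stochastic simulation of one rule.
import random

random.seed(0)
k_bind = 1.0

def simulate(rejection, n_ligands=100, t_end=1.0):
    free = [True] * n_ligands            # site state of each ligand
    t, events, nulls = 0.0, 0, 0
    while True:
        if rejection:
            rate = k_bind * n_ligands    # overestimate: count all ligands
        else:
            rate = k_bind * sum(free)    # exact: count only free ligands
        if rate == 0:
            break
        t += random.expovariate(rate)
        if t > t_end:
            break
        if rejection:
            i = random.randrange(n_ligands)
        else:
            i = random.choice([j for j, f in enumerate(free) if f])
        if rejection and not free[i]:
            nulls += 1                   # null event: sampled ligand cannot react
            continue
        free[i] = False
        events += 1
    return events, nulls

print("rejection     :", simulate(True))
print("rejection-free:", simulate(False))
```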

  3. Automatic de-identification of French clinical records: comparison of rule-based and machine-learning approaches.

    PubMed

    Grouin, Cyril; Zweigenbaum, Pierre

    2013-01-01

    In this paper, we present a comparison of two approaches to automatically de-identify medical records written in French: a rule-based system and a machine-learning based system using a conditional random fields (CRF) formalism. Both systems have been designed to process nine identifiers in a corpus of medical records in cardiology. We performed two evaluations: first on 62 documents in cardiology, and then on 10 documents in foetopathology, produced by optical character recognition (OCR), to evaluate the robustness of our systems. We achieved a 0.843 (rule-based) and 0.883 (machine-learning) exact-match overall F-measure in cardiology. While the rule-based system allowed us to achieve good results on nominative (first and last names) and numerical data (dates, phone numbers, and zip codes), the machine-learning approach performed best on more complex categories (postal addresses, hospital names, medical devices, and towns). On the foetopathology corpus, although our systems had not been designed for this corpus and despite OCR character recognition errors, we obtained promising results: a 0.681 (rule-based) and 0.638 (machine-learning) exact-match overall F-measure. This demonstrates that existing tools can be applied to process new documents of lower quality.
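
    The rule-based side of such a system is essentially a cascade of surface patterns plus lexicons; the sketch below shows the idea with a few regular expressions and a tiny name dictionary. The patterns, labels, and example sentence are assumptions for illustration and are not the rules evaluated in the paper.

```python
# Minimal sketch: regex- and lexicon-based de-identification of clinical text.
import re

RULES = [
    ("DATE",  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")),
    ("PHONE", re.compile(r"\b\d{2}(?:[ .-]\d{2}){4}\b")),
    ("ZIP",   re.compile(r"\b\d{5}\b")),
]
NAME_LEXICON = {"Martin", "Durand"}          # assumed lexicon of known last names

def deidentify(text):
    for label, pattern in RULES:             # numerical identifiers first
        text = pattern.sub(f"<{label}>", text)
    for name in NAME_LEXICON:                # then dictionary-based name masking
        text = re.sub(rf"\b{name}\b", "<NAME>", text)
    return text

print(deidentify("Patient Durand, seen 03/04/2012, phone 01 23 45 67 89, 75013 Paris."))
```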

  4. Building distributed rule-based systems using the AI Bus

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain C.

    1990-01-01

    The AI Bus software architecture was designed to support the construction of large-scale, production-quality applications in areas of high technology flux, running in heterogeneous distributed environments, and utilizing a mix of knowledge-based and conventional components. These goals led to its current development as a layered, object-oriented library for cooperative systems. This paper describes the concepts and design of the AI Bus and its implementation status as a library of reusable and customizable objects, structured by layers from operating system interfaces up to high-level knowledge-based agents. Each agent is a semi-autonomous process with specialized expertise, and consists of a number of knowledge sources (a knowledge base and inference engine). Inter-agent communication mechanisms are based on blackboards and Actors-style acquaintances. As a conservative first implementation, we used C++ on top of Unix, and wrapped an embedded CLIPS with methods for the knowledge source class. This involved designing standard protocols for communication and functions which use these protocols in rules. Embedding several CLIPS objects within a single process was an unexpected problem because of global variables; the solution involved constructing and recompiling a C++ version of CLIPS. We are currently working on a more radical approach to incorporating CLIPS, by separating out its pattern matcher, rule and fact representations and other components as true object-oriented modules.

  5. Optical Generation of Fuzzy-Based Rules

    NASA Astrophysics Data System (ADS)

    Gur, Eran; Mendlovic, David; Zalevsky, Zeev

    2002-08-01

    In the last third of the 20th century, fuzzy logic has risen from a mathematical concept to an applicable approach in soft computing. Today, fuzzy logic is used in control systems for various applications, such as washing machines, train-brake systems, automobile automatic gear, and so forth. The approach of optical implementation of fuzzy inferencing was given by the authors in previous papers, giving an extra emphasis to applications with two dominant inputs. In this paper the authors introduce a real-time optical rule generator for the dual-input fuzzy-inference engine. The paper briefly goes over the dual-input optical implementation of fuzzy-logic inferencing. Then, the concept of constructing a set of rules from given data is discussed. Next, the authors show ways to implement this procedure optically. The discussion is accompanied by an example that illustrates the transformation from raw data into fuzzy set rules.

  6. A Simple Algorithm for Obtaining Nearly Optimal Quadrature Rules for NURBS-based Isogeometric Analysis

    DTIC Science & Technology

    2012-01-05

    Università degli Studi di Pavia; Istituto di Matematica Applicata e Tecnologie Informatiche “E. Magenes” del CNR, Pavia; DAEIMI, Università degli Studi di Cassino; Institute for Computational Engineering and Sciences, University of Texas at Austin; Dipartimento di Matematica, Università degli Studi di ...

  7. Compensatory processing during rule-based category learning in older adults.

    PubMed

    Bharani, Krishna L; Paller, Ken A; Reber, Paul J; Weintraub, Sandra; Yanar, Jorge; Morrison, Robert G

    2016-01-01

    Healthy older adults typically perform worse than younger adults at rule-based category learning, but better than patients with Alzheimer's or Parkinson's disease. To further investigate aging's effect on rule-based category learning, we monitored event-related potentials (ERPs) while younger and neuropsychologically typical older adults performed a visual category-learning task with a rule-based category structure and trial-by-trial feedback. Using these procedures, we previously identified ERPs sensitive to categorization strategy and accuracy in young participants. In addition, previous studies have demonstrated the importance of neural processing in the prefrontal cortex and the medial temporal lobe for this task. In this study, older adults showed lower accuracy and longer response times than younger adults, but there were two distinct subgroups of older adults. One subgroup showed near-chance performance throughout the procedure, never categorizing accurately. The other subgroup reached asymptotic accuracy that was equivalent to that in younger adults, although they categorized more slowly. These two subgroups were further distinguished via ERPs. Consistent with the compensation theory of cognitive aging, older adults who successfully learned showed larger frontal ERPs when compared with younger adults. Recruitment of prefrontal resources may have improved performance while slowing response times. Additionally, correlations of feedback-locked P300 amplitudes with category-learning accuracy differentiated successful younger and older adults. Overall, the results suggest that the ability to adapt one's behavior in response to feedback during learning varies across older individuals, and that the failure of some to adapt their behavior may reflect inadequate engagement of prefrontal cortex.

  8. Compensatory Processing During Rule-Based Category Learning in Older Adults

    PubMed Central

    Bharani, Krishna L.; Paller, Ken A.; Reber, Paul J.; Weintraub, Sandra; Yanar, Jorge; Morrison, Robert G.

    2016-01-01

    Healthy older adults typically perform worse than younger adults at rule-based category learning, but better than patients with Alzheimer's or Parkinson's disease. To further investigate aging's effect on rule-based category learning, we monitored event-related potentials (ERPs) while younger and neuropsychologically typical older adults performed a visual category-learning task with a rule-based category structure and trial-by-trial feedback. Using these procedures, we previously identified ERPs sensitive to categorization strategy and accuracy in young participants. In addition, previous studies have demonstrated the importance of neural processing in the prefrontal cortex and the medial temporal lobe for this task. In this study, older adults showed lower accuracy and longer response times than younger adults, but there were two distinct subgroups of older adults. One subgroup showed near-chance performance throughout the procedure, never categorizing accurately. The other subgroup reached asymptotic accuracy that was equivalent to that in younger adults, although they categorized more slowly. These two subgroups were further distinguished via ERPs. Consistent with the compensation theory of cognitive aging, older adults who successfully learned showed larger frontal ERPs when compared with younger adults. Recruitment of prefrontal resources may have improved performance while slowing response times. Additionally, correlations of feedback-locked P300 amplitudes with category-learning accuracy differentiated successful younger and older adults. Overall, the results suggest that the ability to adapt one's behavior in response to feedback during learning varies across older individuals, and that the failure of some to adapt their behavior may reflect inadequate engagement of prefrontal cortex. PMID:26422522

  9. The Interactive Effects of the Availability of Objectives and/or Rules on Computer-Based Learning: A Replication.

    ERIC Educational Resources Information Center

    Merrill, Paul F.; And Others

    To replicate and extend the results of a previous study, this project investigated the effects of behavioral objectives and/or rules on computer-based learning task performance. The 133 subjects were randomly assigned to an example-only, objective-example, rule-example, or objective-rule-example group. The availability of rules and/or objectives…

  10. Characterizing Rule-Based Category Learning Deficits in Patients with Parkinson's Disease

    ERIC Educational Resources Information Center

    Filoteo, J. Vincent; Maddox, W. Todd; Ing, A. David; Song, David D.

    2007-01-01

    Parkinson's disease (PD) patients and normal controls were tested in three category learning experiments to determine if previously observed rule-based category learning impairments in PD patients were due to deficits in selective attention or working memory. In Experiment 1, optimal categorization required participants to base their decision on a…

  11. Estimating Classification Accuracy for Complex Decision Rules Based on Multiple Scores

    ERIC Educational Resources Information Center

    Douglas, Karen M.; Mislevy, Robert J.

    2010-01-01

    Important decisions about students are made by combining multiple measures using complex decision rules. Although methods for characterizing the accuracy of decisions based on a single measure have been suggested by numerous researchers, such methods are not useful for estimating the accuracy of decisions based on multiple measures. This study…

  12. Multiresolution molecular mechanics: Surface effects in nanoscale materials

    NASA Astrophysics Data System (ADS)

    Yang, Qingcheng; To, Albert C.

    2017-05-01

    Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015), [57]) is applied to capture surface effects for nanosized structures by designing a surface summation rule SRS within the framework of MMM. Combined with the previously proposed bulk summation rule SRB, the MMM summation rule SRMMM is completed. SRS and SRB are consistently formed within SRMMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key idea behind the good performance of SRMMM is that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface area is different from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonding. As such, SRS and SRB are employed for surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SRMMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SRMMM with respect to high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the introduced sampling error with SRMMM, which is analogous to the numerical integration error with quadrature rules in FEM, is very small.
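
    The quadrature analogy can be pictured with a toy energy sum: instead of summing site energies over every atom, a weighted sum over a few sampling atoms is used. The following is a conceptual Python sketch under simplifying assumptions (a 1-D chain and a toy truncated pair potential), not the published SR summation rules; the small mismatch at the chain ends illustrates why surface regions need their own weights.

      import numpy as np

      def site_energy(positions, i, cutoff=1.5):
          # Toy Lennard-Jones-like pair energy assigned to atom i, truncated at the cutoff.
          d = np.linalg.norm(positions - positions[i], axis=1)
          d = d[(d > 0.0) & (d < cutoff)]
          return float(np.sum(4.0 * (d**-12 - d**-6)))

      # Toy atomic chain with 1.1 spacing; only nearest neighbours fall inside the cutoff.
      positions = np.array([[1.1 * i, 0.0, 0.0] for i in range(50)])

      full = sum(site_energy(positions, i) for i in range(len(positions)))

      # Quadrature-type summation: sample every 5th atom, weighted by the 5 atoms it represents.
      sampled = range(0, len(positions), 5)
      approx = sum(5.0 * site_energy(positions, i) for i in sampled)

      print(full, approx)   # the small mismatch comes from the chain ends (surface atoms)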

  13. Instrument Reflections and Scene Amplitude Modulation in a Polychromatic Microwave Quadrature Interferometer

    NASA Technical Reports Server (NTRS)

    Dobson, Chris C.; Jones, Jonathan E.; Chavers, Greg

    2003-01-01

    A polychromatic microwave quadrature interferometer has been characterized using several laboratory plasmas. Reflections between the transmitter and the receiver have been observed, and the effects of including reflection terms in the data reduction equation have been examined. An error analysis which includes the reflections, modulation of the scene beam amplitude by the plasma, and simultaneous measurements at two frequencies has been applied to the empirical database, and the results are summarized. For reflection amplitudes around 10%, the reflection terms were found to reduce the calculated error bars for electron density measurements by about a factor of 2. The impact of amplitude modulation is also quantified. In the complete analysis, the mean error bar for high-density measurements is 7.5%, and the mean phase shift error for low-density measurements is 1.2°.

  14. A novel methodology for building robust design rules by using design based metrology (DBM)

    NASA Astrophysics Data System (ADS)

    Lee, Myeongdong; Choi, Seiryung; Choi, Jinwoo; Kim, Jeahyun; Sung, Hyunju; Yeo, Hyunyoung; Shim, Myoungseob; Jin, Gyoyoung; Chung, Eunseung; Roh, Yonghan

    2013-03-01

    This paper addresses a methodology for building robust design rules by using design based metrology (DBM). The conventional method for building design rules has been to use a simulation tool and a simple pattern spider mask. At the early stage of a device, the estimation accuracy of the simulation tool is poor, and the evaluation of the simple pattern spider mask is rather subjective because it depends on the experiential judgment of an engineer. In this work, we designed a huge number of pattern situations including various 1D and 2D design structures. In order to overcome the difficulties of inspecting many types of patterns, we introduced Design Based Metrology (DBM) of Nano Geometry Research, Inc., with which these mass patterns could be inspected at high speed. We also carried out quantitative analysis on PWQ silicon data to estimate process variability. Our methodology demonstrates high speed and accuracy for building design rules. All of the test patterns were inspected within a few hours. Mass silicon data were handled by statistical processing rather than personal judgment. From the results, robust design rules are successfully verified and extracted. Finally, we found that our methodology is appropriate for building robust design rules.

  15. Primary motor cortex contributes to the implementation of implicit value-based rules during motor decisions.

    PubMed

    Derosiere, Gerard; Zénon, Alexandre; Alamia, Andrea; Duque, Julie

    2017-02-01

    In the present study, we investigated the functional contribution of the human primary motor cortex (M1) to motor decisions. Continuous theta burst stimulation (cTBS) was used to alter M1 activity while participants performed a decision-making task in which the reward associated with the subjects' responses (right-hand finger movements) depended on explicit and implicit value-based rules. Subjects performed the task over two consecutive days and cTBS occurred in the middle of Day 2, once the subjects were just about to implement implicit rules, in addition to the explicit instructions, to choose their responses, as evident in the control group (cTBS over the right somatosensory cortex). Interestingly, cTBS over the left M1 prevented subjects from implementing the implicit value-based rule while its implementation was enhanced in the group receiving cTBS over the right M1. Hence, cTBS had opposite effects depending on whether it was applied on the contralateral or ipsilateral M1. The use of the explicit value-based rule was unaffected by cTBS in the three groups of subjects. Overall, the present study provides evidence for a functional contribution of M1 to the implementation of freshly acquired implicit rules, possibly through its involvement in a cortico-subcortical network controlling value-based motor decisions. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Achromatic registration of quadrature components of the optical spectrum in spectral domain optical coherence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shilyagin, P A; Gelikonov, G V; Gelikonov, V M

    2014-07-31

    We have thoroughly investigated the method of simultaneous reception of spectral components with the achromatised quadrature phase shift between two portions of a reference wave, designed for the effective suppression of the 'mirror' artefact in the resulting image obtained by means of spectral domain optical coherence tomography (SD OCT). We have developed and experimentally tested a phase-shifting element consisting of a beam divider, which splits the reference optical beam into the two beams, and of delay lines being individual for each beam, which create a mutual phase difference of π/2 in the double pass of the reference beam. The phase shift achromatism over a wide spectral range is achieved by using in the delay lines the individual elements with different dispersion characteristics. The ranges of admissible adjustment parameters of the achromatised delay line are estimated for exact and inexact conformity of the geometric characteristics of its components to those calculated. A possibility of simultaneous recording of the close-to-quadrature spectral components with a single linear photodetector element is experimentally confirmed. The suppression of the artefact mirror peak in the OCT-signal by an additional 9 dB relative to the level of its suppression is experimentally achieved when the air delay line is used. Two-dimensional images of the surface positioned at an angle to the axis of the probe beam are obtained with the correction of the 'mirror' artefact while maintaining the dynamic range of the image. (laser biophotonics)

  17. Attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm.

    PubMed

    Zhang, Jie; Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent of a rule, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It only needs to scan the database once to create the attribute index of each attribute. Then none of the metric values used to evaluate an association rule need any further database scans, as the data are acquired only by means of the attribute indices. The paper views association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It does not require the user-specified minimum support and minimum confidence anymore, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance, and it can significantly reduce the number of comparisons and time consumption.
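
    The single-scan index idea can be sketched as follows (an illustrative Python toy, not the IUARMMEA implementation): each attribute value is mapped once to the set of record ids containing it, after which the support and confidence of any candidate rule follow from set intersections rather than further database scans.

      from collections import defaultdict

      records = [
          {"bread", "milk"},
          {"bread", "butter"},
          {"milk", "butter"},
          {"bread", "milk", "butter"},
      ]

      index = defaultdict(set)
      for rid, items in enumerate(records):      # single scan of the database
          for item in items:
              index[item].add(rid)

      def cover(itemset):
          # Record ids containing every item of the itemset, via set intersection.
          ids = index[next(iter(itemset))].copy()
          for item in itemset:
              ids &= index[item]
          return ids

      def support_confidence(antecedent, consequent):
          n = len(records)
          both = cover(antecedent | consequent)
          ante = cover(antecedent)
          return len(both) / n, len(both) / len(ante) if ante else 0.0

      print(support_confidence({"bread"}, {"milk"}))   # (0.5, 0.666...)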

  18. Attribute Index and Uniform Design Based Multiobjective Association Rule Mining with Evolutionary Algorithm

    PubMed Central

    Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent of a rule, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It only needs to scan the database once to create the attribute index of each attribute. Then none of the metric values used to evaluate an association rule need any further database scans, as the data are acquired only by means of the attribute indices. The paper views association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It does not require the user-specified minimum support and minimum confidence anymore, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance, and it can significantly reduce the number of comparisons and time consumption. PMID:23766683

  19. A Lagrange-type projector on the real line

    NASA Astrophysics Data System (ADS)

    Mastroianni, G.; Notarangelo, I.

    2010-01-01

    We introduce an interpolation process based on some of the zeros of the m-th generalized Freud polynomial. Convergence results and error estimates are given. In particular we show that, in some important function spaces, the interpolating polynomial behaves like the best approximation. Moreover, the stability and the convergence of some quadrature rules are proved.
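
    A small sketch of a Lagrange-type interpolation operator in Python. Chebyshev nodes on [-1, 1] are used here as an easy-to-compute stand-in for the zeros of generalized Freud polynomials, so this only illustrates the construction of the interpolating polynomial, not the weighted setting studied in the paper.

      import numpy as np

      def lagrange_interpolant(f, nodes):
          fvals = f(nodes)
          def p(x):
              # Evaluate the Lagrange form: sum_j f(x_j) * l_j(x).
              total = np.zeros_like(x, dtype=float)
              for j, xj in enumerate(nodes):
                  lj = np.ones_like(x, dtype=float)
                  for k, xk in enumerate(nodes):
                      if k != j:
                          lj *= (x - xk) / (xj - xk)
                  total += fvals[j] * lj
              return total
          return p

      n = 12
      nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))   # Chebyshev nodes
      p = lagrange_interpolant(np.exp, nodes)
      x = np.linspace(-1.0, 1.0, 5)
      print(np.max(np.abs(p(x) - np.exp(x))))   # interpolation error, close to machine precision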

  20. Layout optimization with assist features placement by model based rule tables for 2x node random contact

    NASA Astrophysics Data System (ADS)

    Jun, Jinhyuck; Park, Minwoo; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Do, Munhoe; Lee, Dongchan; Kim, Taehoon; Choi, Junghoe; Luk-Pat, Gerard; Miloslavsky, Alex

    2015-03-01

    As the industry pushes to ever more complex illumination schemes to increase resolution for next generation memory and logic circuits, sub-resolution assist feature (SRAF) placement requirements become increasingly severe. Therefore device manufacturers are evaluating improvements in SRAF placement algorithms which do not sacrifice main feature (MF) patterning capability. There are several well-known methods to generate SRAFs, such as Rule Based Assist Features (RBAF), Model Based Assist Features (MBAF), and hybrid assist features combining the different algorithms using both RBAF and MBAF. Rule Based Assist Features (RBAF) continue to be deployed, even with the availability of Model Based Assist Features (MBAF) and Inverse Lithography Technology (ILT). Certainly for the 3x nm node, and even at the 2x nm nodes and lower, RBAF is used because it demands less run time and provides better consistency. Since RBAF is needed now and in the future, what is also needed is a faster method to create the AF rule tables. The current method typically involves making masks and printing wafers that contain several experiments, varying the main feature configurations, AF configurations, dose conditions, and defocus conditions - this is a time consuming and expensive process. In addition, as the technology node shrinks, wafer process changes and source shape redesigns occur more frequently, escalating the cost of rule table creation. Furthermore, as the demand on process margin escalates, there is a greater need for multiple rule tables: each tailored to a specific set of main-feature configurations. Model Assisted Rule Tables (MART) creates a set of test patterns, and evaluates the simulated CD at nominal conditions, defocused conditions and off-dose conditions. It also uses lithographic simulation to evaluate the likelihood of AF printing. It then analyzes the simulation data to automatically create AF rule tables. It means that analysis results display the cost of

  1. An expert system design to diagnose cancer by using a new method reduced rule base.

    PubMed

    Başçiftçi, Fatih; Avuçlu, Emre

    2018-04-01

    A Medical Expert System (MES) was developed which uses a Reduced Rule Base to diagnose cancer risk according to the symptoms in an individual. A total of 13 symptoms were used. With the new MES, the reduced rules are checked instead of all possibilities (2^13 = 8192 different possibilities occur). By checking reduced rules, results are found more quickly. The method of two-level simplification of Boolean functions was used to obtain the Reduced Rule Base. Thanks to the developed application, with dynamic numbers of inputs and outputs on different platforms, anyone can easily test for their own cancer. More accurate results were obtained considering all the possibilities related to cancer. Thirteen different risk factors were used to determine the type of cancer. The truth table produced in our study has 13 inputs and 4 outputs. The Boolean Function Minimization method is used to obtain fewer cases by simplifying logical functions. Cancer can be diagnosed quickly thanks to the control of the simplified 4 output functions. Diagnosis made with the 4 output values obtained using the Reduced Rule Base was found to be quicker than diagnosis made by screening all 2^13 = 8192 possibilities. With the improved MES, more probabilities were added to the process and more accurate diagnostic results were obtained. As a result of the simplification process, a 100% gain in diagnosis speed was obtained for breast and renal cancer diagnosis, and a 99% gain for cervical and lung cancer diagnosis. With Boolean function minimization, a smaller number of rules is evaluated instead of a large number of rules. Reducing the number of rules allows the designed system to work more efficiently and to save time, and facilitates transferring the rules to the designed expert systems. Interfaces were developed on different software platforms to enable users to test the accuracy of the application. Anyone is able to diagnose the cancer themselves using determinative risk factors. Thereby
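
    The speed gain can be pictured with a tiny sketch: a reduced rule base stores a handful of minimized product terms (patterns over the 13 symptom bits with don't-care positions) instead of all 2^13 rows of the truth table. The patterns and labels below are hypothetical placeholders, not the published rules.

      # Each pattern is 13 characters over {'1', '0', '-'}; '-' means "don't care".
      REDUCED_RULES = [
          ("1-1----------", "breast cancer risk"),   # hypothetical minimized terms
          ("0--11--------", "renal cancer risk"),
          ("---0--1---1--", "lung cancer risk"),
      ]

      def matches(pattern, symptoms):
          return all(p == "-" or p == s for p, s in zip(pattern, symptoms))

      def diagnose(symptoms):
          # symptoms: string of 13 bits, e.g. "1010010001000"
          return [label for pattern, label in REDUCED_RULES if matches(pattern, symptoms)]

      print(diagnose("1010000000000"))   # -> ['breast cancer risk'] after only 3 rule checks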

  2. Simple methods of exploiting the underlying structure of rule-based systems

    NASA Technical Reports Server (NTRS)

    Hendler, James

    1986-01-01

    Much recent work in the field of expert systems research has aimed at exploiting the underlying structures of the rule base for reasons of analysis. Such techniques as Petri-nets and GAGs have been proposed as representational structures that will allow complete analysis. Much has been made of proving isomorphisms between the rule bases and the mechanisms, and in examining the theoretical power of this analysis. In this paper we describe some early work in a new system which has much simpler (and thus, one hopes, more easily achieved) aims and less formality. The technique being examined is a very simple one: OPS5 programs are analyzed in a purely syntactic way and a FSA description is generated. In this paper we describe the technique and some user interface tools which exploit this structure.

  3. Stakeholders' Home and Community Based Services Settings Rule Knowledge

    ERIC Educational Resources Information Center

    Friedman, Carli

    2018-01-01

    Medicaid Home and Community Based Services (HCBS) waiver programs provide the majority of long-term services and supports for people with intellectual and developmental disabilities (IDD). Relatively new (2014) HCBS rules (CMS 2249-F/2296-F) governing these programs require "meaningful community" integration of people with disabilities…

  4. A Rules-Based Service for Suggesting Visualizations to Analyze Earth Science Phenomena.

    NASA Astrophysics Data System (ADS)

    Prabhu, A.; Zednik, S.; Fox, P. A.; Ramachandran, R.; Maskey, M.; Shie, C. L.; Shen, S.

    2016-12-01

    Current Earth Science Information Systems lack support for new or interdisciplinary researchers, who may be unfamiliar with the domain vocabulary or the breadth of relevant data available. We need to evolve the current information systems, to reduce the time required for data preparation, processing and analysis. This can be done by effectively salvaging the "dark" resources in Earth Science. We assert that Earth science metadata assets are dark resources, information resources that organizations collect, process, and store for regular business or operational activities but fail to utilize for other purposes. In order to effectively use these dark resources, especially for data processing and visualization, we need a combination of domain, data product and processing knowledge, i.e. a knowledge base from which specific data operations can be performed. In this presentation, we describe a semantic, rules-based approach that provides a service to visualize Earth Science phenomena, based on the data variables extracted using the "dark" metadata resources. We use Jena rules to make assertions about compatibility between a phenomenon and various visualizations based on multiple factors. We created separate orthogonal rulesets to map each of these factors to the various phenomena. Some of the factors we have considered include measurements, spatial resolution and time intervals. This approach enables easy additions and deletions based on newly obtained domain knowledge or phenomena-related information and thus improves the accuracy of the rules service overall.

  5. Prefrontal Contributions to Rule-Based and Information-Integration Category Learning

    ERIC Educational Resources Information Center

    Schnyer, David M.; Maddox, W. Todd; Ell, Shawn; Davis, Sarah; Pacheco, Jenni; Verfaellie, Mieke

    2009-01-01

    Previous research revealed that the basal ganglia play a critical role in category learning [Ell, S. W., Marchant, N. L., & Ivry, R. B. (2006). "Focal putamen lesions impair learning in rule-based, but not information-integration categorization tasks." "Neuropsychologia", 44(10), 1737-1751; Maddox, W. T. & Filoteo, J.…

  6. A rule of seven in Watson-Crick base-pairing of mismatched sequences.

    PubMed

    Cisse, Ibrahim I; Kim, Hajin; Ha, Taekjip

    2012-05-13

    Sequence recognition through base-pairing is essential for DNA repair and gene regulation, but the basic rules governing this process remain elusive. In particular, the kinetics of annealing between two imperfectly matched strands is not well characterized, despite its potential importance in nucleic acid-based biotechnologies and gene silencing. Here we use single-molecule fluorescence to visualize the multiple annealing and melting reactions of two untethered strands inside a porous vesicle, allowing us to precisely quantify the annealing and melting rates. The data as a function of mismatch position suggest that seven contiguous base pairs are needed for rapid annealing of DNA and RNA. This phenomenological rule of seven may underlie the requirement for seven nucleotides of complementarity to seed gene silencing by small noncoding RNA and may help guide performance improvement in DNA- and RNA-based bio- and nanotechnologies, in which off-target effects can be detrimental.

  7. Rule-based expert system for maritime anomaly detection

    NASA Astrophysics Data System (ADS)

    Roy, Jean

    2010-04-01

    Maritime domain operators/analysts have a mandate to be aware of all that is happening within their areas of responsibility. This mandate derives from the needs to defend sovereignty, protect infrastructures, counter terrorism, detect illegal activities, etc., and it has become more challenging in the past decade, as commercial shipping turned into a potential threat. In particular, a huge portion of the data and information made available to the operators/analysts is mundane, from maritime platforms going about normal, legitimate activities, and it is very challenging for them to detect and identify the non-mundane. To achieve such anomaly detection, they must establish numerous relevant situational facts from a variety of sensor data streams. Unfortunately, many of the facts of interest just cannot be observed; the operators/analysts thus use their knowledge of the maritime domain and their reasoning faculties to infer these facts. As they are often overwhelmed by the large amount of data and information, automated reasoning tools could be used to support them by inferring the necessary facts, ultimately providing indications and warning on a small number of anomalous events worthy of their attention. Along this line of thought, this paper describes a proof-of-concept prototype of a rule-based expert system implementing automated rule-based reasoning in support of maritime anomaly detection.

  8. The relevance of a rules-based maize marketing policy: an experimental case study of Zambia.

    PubMed

    Abbink, Klaus; Jayne, Thomas S; Moller, Lars C

    2011-01-01

    Strategic interaction between public and private actors is increasingly recognised as an important determinant of agricultural market performance in Africa and elsewhere. Trust and consultation tends to positively affect private activity while uncertainty of government behaviour impedes it. This paper reports on a laboratory experiment based on a stylised model of the Zambian maize market. The experiment facilitates a comparison between discretionary interventionism and a rules-based policy in which the government pre-commits itself to a future course of action. A simple precommitment rule can, in theory, overcome the prevailing strategic dilemma by encouraging private sector participation. Although this result is also borne out in the economic experiment, the improvement in private sector activity is surprisingly small and not statistically significant due to irrationally cautious choices by experimental governments. Encouragingly, a rules-based policy promotes a much more stable market outcome thereby substantially reducing the risk of severe food shortages. These results underscore the importance of predictable and transparent rules for the state's involvement in agricultural markets.

  9. An Investigation into the Application of Generalized Differential Quadrature Method to Bending Analysis of Composite Sandwich Plates

    NASA Astrophysics Data System (ADS)

    Ghassemi, Aazam; Yazdani, Mostafa; Hedayati, Mohamad

    2017-12-01

    In this work, based on the First Order Shear Deformation Theory (FSDT), an attempt is made to explore the applicability and accuracy of the Generalized Differential Quadrature Method (GDQM) for bending analysis of composite sandwich plates under static loading. Comparative studies of the bending behavior of composite sandwich plates are made between two types of boundary conditions for different cases. The effects of fiber orientation, the ratio of thickness to length of the plate, and the ratio of the thickness of the core to the thickness of the face sheet on the transverse displacement and moment resultants are studied. As shown in this study, the role of the core thickness in the deformation of these plates can be reversed by the stiffness of the core in comparison with the sheets. The obtained graphs give very good results for the optimum design of sandwich plates. In comparison with existing solutions, fast convergence rates and high-accuracy results can be achieved by the GDQ method.
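
    At the core of the differential quadrature idea, the derivative of the field at a grid point is expressed as a weighted sum of the field values at all grid points. The following is a minimal 1-D Python sketch of that idea (first derivative only, on a Chebyshev-Gauss-Lobatto grid), not the plate formulation used in the paper.

      import numpy as np

      def dq_weights(x):
          # First-derivative differential-quadrature weights from Lagrange interpolation:
          # A[i, j] = P_i / ((x_i - x_j) P_j) for i != j, with P_i = prod_{k != i}(x_i - x_k),
          # and A[i, i] = -sum_{j != i} A[i, j].
          n = len(x)
          P = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i]) for i in range(n)])
          A = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  if i != j:
                      A[i, j] = P[i] / ((x[i] - x[j]) * P[j])
          np.fill_diagonal(A, -A.sum(axis=1))
          return A

      x = np.cos(np.pi * np.arange(11) / 10)       # Chebyshev-Gauss-Lobatto grid, common in GDQM
      A = dq_weights(x)
      u = np.sin(x)
      print(np.max(np.abs(A @ u - np.cos(x))))     # approximation error of d/dx sin(x)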

  10. Detection of pseudosinusoidal epileptic seizure segments in the neonatal EEG by cascading a rule-based algorithm with a neural network.

    PubMed

    Karayiannis, Nicolaos B; Mukherjee, Amit; Glover, John R; Ktonas, Periklis Y; Frost, James D; Hrachovy, Richard A; Mizrahi, Eli M

    2006-04-01

    This paper presents an approach to detect epileptic seizure segments in the neonatal electroencephalogram (EEG) by characterizing the spectral features of the EEG waveform using a rule-based algorithm cascaded with a neural network. A rule-based algorithm screens out short segments of pseudosinusoidal EEG patterns as epileptic based on features in the power spectrum. The output of the rule-based algorithm is used to train and compare the performance of conventional feedforward neural networks and quantum neural networks. The results indicate that the trained neural networks, cascaded with the rule-based algorithm, improved the performance of the rule-based algorithm acting by itself. The evaluation of the proposed cascaded scheme for the detection of pseudosinusoidal seizure segments reveals its potential as a building block of the automated seizure detection system under development.

  11. Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.

    PubMed

    Krishnamurthy, V; Krishnamurthy, E V

    1999-03-01

    A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, the computations are interpreted as the outcome arising out of the interaction of elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among the subsets of elements, so that the elements evolve toward an equilibrium or unstable or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. Also, it can handle a probabilistic mode of computation if we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.

  12. Effective quadrature formula in solving linear integro-differential equations of order two

    NASA Astrophysics Data System (ADS)

    Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.

    2017-08-01

    In this note, we approximately solve the general form of Fredholm-Volterra integro-differential equations (IDEs) of order two with boundary conditions and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind by using standard integration techniques and the identity between multiple and single integrals; then truncated Legendre series are used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, and the collocation points are chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and the Gaussian elimination method is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and dominates the others in many cases. The general theory of existence of the solution is also discussed.
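
    The Gauss-Legendre ingredient can be sketched on its own. This is not the authors' full solver for the integro-differential equation, just the quadrature step, using NumPy's Legendre nodes and weights; the integrand and interval are arbitrary examples.

      import numpy as np

      def gauss_legendre_integral(f, a, b, n=8):
          # Nodes are the roots of the degree-n Legendre polynomial, with the standard
          # Gauss weights on [-1, 1]; map them to [a, b] and form the weighted sum.
          x, w = np.polynomial.legendre.leggauss(n)
          t = 0.5 * (b - a) * x + 0.5 * (b + a)
          return 0.5 * (b - a) * np.sum(w * f(t))

      # Example: integral of t^2 * exp(t) over [0, 1]; exact value is e - 2 ≈ 0.718281828.
      print(gauss_legendre_integral(lambda t: t**2 * np.exp(t), 0.0, 1.0))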

  13. Implementation of a spike-based perceptron learning rule using TiO2-x memristors.

    PubMed

    Mostafa, Hesham; Khiat, Ali; Serb, Alexander; Mayr, Christian G; Indiveri, Giacomo; Prodromakis, Themis

    2015-01-01

    Synaptic plasticity plays a crucial role in allowing neural networks to learn and adapt to various input environments. Neuromorphic systems need to implement plastic synapses to obtain basic "cognitive" capabilities such as learning. One promising and scalable approach for implementing neuromorphic synapses is to use nano-scale memristors as synaptic elements. In this paper we propose a hybrid CMOS-memristor system comprising CMOS neurons interconnected through TiO2-x memristors, and spike-based learning circuits that modulate the conductance of the memristive synapse elements according to a spike-based Perceptron plasticity rule. We highlight a number of advantages for using this spike-based plasticity rule as compared to other forms of spike timing dependent plasticity (STDP) rules. We provide experimental proof-of-concept results with two silicon neurons connected through a memristive synapse that show how the CMOS plasticity circuits can induce stable changes in memristor conductances, giving rise to increased synaptic strength after a potentiation episode and to decreased strength after a depression episode.
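
    In abstract form, the spike-based Perceptron rule adjusts each synaptic weight (standing in here for a memristor conductance) only when the neuron's output disagrees with the teacher signal, potentiating or depressing in the corrective direction within a bounded range. A schematic Python sketch of that update follows; it is not the CMOS/memristor circuit itself, and the learning rate, bounds, and example pattern are arbitrary.

      import numpy as np

      def perceptron_update(w, x, target, lr=0.05, w_min=0.0, w_max=1.0):
          y = 1 if np.dot(w, x) > 0 else 0           # neuron fires if net input exceeds threshold
          if y != target:
              w = w + lr * (target - y) * x           # potentiate (target=1) or depress (target=0)
              w = np.clip(w, w_min, w_max)            # keep weights in a bounded conductance range
          return w

      rng = np.random.default_rng(0)
      w = rng.uniform(0.2, 0.8, size=4)               # initial synaptic weights (conductances)
      x = np.array([1.0, 0.0, 1.0, 1.0])              # binary presynaptic spike pattern
      w = perceptron_update(w, x, target=0)           # depression episode for this pattern
      print(w)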

  14. Analyzing Large Gene Expression and Methylation Data Profiles Using StatBicRM: Statistical Biclustering-Based Rule Mining

    PubMed Central

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra

    2015-01-01

    Microarray and beadchip are two most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining) to identify special type of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques from the biological datasets. At first, a novel statistical strategy has been utilized to eliminate the insignificant/low-significant/redundant genes in such way that significance level must satisfy the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special type of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than the other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time, and can work on big dataset. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using David database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how much the evolved rules are able to describe accurately the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors with other rule-based classifiers. Statistical significance tests are also performed for verifying the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers is also starting with the same post-discretized data

  15. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    PubMed

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra

    2015-01-01

    Microarray and beadchip are two most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining) to identify special type of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques from the biological datasets. At first, a novel statistical strategy has been utilized to eliminate the insignificant/low-significant/redundant genes in such way that significance level must satisfy the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special type of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than the other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time, and can work on big dataset. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using David database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how much the evolved rules are able to describe accurately the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors with other rule-based classifiers. Statistical significance tests are also performed for verifying the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers is also starting with the same post-discretized data

  16. A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes

    NASA Astrophysics Data System (ADS)

    Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria

    In this paper we discuss the importance of ensuring that business processes are robust and agile at the same time. To this end, we consider reviewing the way business processes are managed. For instance, we consider offering a flexible way to model processes so that changes in regulations are handled through some self-healing mechanisms. These changes may raise exceptions at run-time if not properly reflected in these processes. To this end we propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized with a set of rules that implement an organization’s policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Post-condition-Post-event). This formalism allows translating a process into a graph of rules that is analyzed in terms of reliability and flexibility.

  17. Criterion learning in rule-based categorization: Simulation of neural mechanism and new data

    PubMed Central

    Helie, Sebastien; Ell, Shawn W.; Filoteo, J. Vincent; Maddox, W. Todd

    2015-01-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define ‘long’ and ‘short’). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL’s implications for future research on rule learning. PMID:25682349

  18. Criterion learning in rule-based categorization: simulation of neural mechanism and new data.

    PubMed

    Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd

    2015-04-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), Proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW using SEER-SEM as an illustration of these principles when appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Obtaining consistency in code counting will be presented as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data is obtained, a mapping into the JPL Work Breakdown Structure (WBS) from the SEER-SEM output is illustrated. For across the board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions with the SLOC, data description, brief mission description and the most relevant SEER-SEM parameter values is given to illustrate an encapsulation of the used and calculated data involved in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. This system's incarnation is achieved via the C Language Integrated Production System (CLIPS) and will be addressed at the end of this paper.

  20. Challenges for Rule Systems on the Web

    NASA Astrophysics Data System (ADS)

    Hu, Yuh-Jong; Yeh, Ching-Long; Laun, Wolfgang

    The RuleML Challenge started in 2007 with the objective of inspiring the issues of implementation for management, integration, interoperation and interchange of rules in an open distributed environment, such as the Web. Rules are usually classified into three types: deductive rules, normative rules, and reactive rules. The reactive rules are further classified as ECA rules and production rules. The study of combining rules and ontologies traces back to earlier active rule systems for relational and object-oriented (OO) databases. Recently, this issue has become one of the most important research problems in the Semantic Web. Once we consider a computer executable policy as a declarative set of rules and ontologies that guides the behavior of entities within a system, we have a flexible way to implement real-world policies without rewriting the computer code, as we did before. Fortunately, we have de facto rule markup languages, such as RuleML or RIF, to achieve the portability and interchange of rules for different rule systems. Otherwise, executing real-life rule-based applications on the Web is almost impossible. Several commercial or open source rule engines are available for rule-based applications. However, we still need a standard rule language and benchmarks, not only to compare rule systems but also to measure progress in the field. Finally, a number of real-life rule-based use cases will be investigated to demonstrate the applicability of current rule systems on the Web.

  1. Rule-based modeling and simulations of the inner kinetochore structure.

    PubMed

    Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar

    2013-09-01

    Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variation of protein complexes. Available classical modeling approaches are often insufficient for the analysis of very large and complex networks in detail. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimate of the probability distribution of the inner kinetochore 3D architecture and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome occurs efficiently only at the higher protein concentrations realized during S-phase, but possibly not in G1. Above a certain nucleosome distance, the protein bridge barely formed, pointing to the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd

  2. DTFP-Growth: Dynamic Threshold-Based FP-Growth Rule Mining Algorithm Through Integrating Gene Expression, Methylation, and Protein-Protein Interaction Profiles.

    PubMed

    Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan

    2018-04-01

    Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work for a single biological data set, and, in most cases, a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper, we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation, and protein-protein interaction profiles based on weighted shortest distance to find novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely, Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL), for each rule by integrating the co-expression, co-methylation, and protein-protein interactions existing in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and subsequently it is verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively. If all three of these conditions hold for a rule, the rule is treated as a resultant rule. One of the major advantages of the proposed method compared with other related state-of-the-art methods is that it considers both the quantitative and interactive significance among all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules compared to previous methods.
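
    The per-rule test can be sketched as follows. The published DVS/DVC/DVL formulas are derived from weighted shortest distances over the integrated networks, so the threshold functions below are hypothetical stand-ins that only illustrate the rule-by-rule comparison of support, confidence, and lift against their dynamic cutoffs.

      def dynamic_thresholds(distance, base_support=0.2, base_conf=0.6, base_lift=1.0):
          # Hypothetical stand-in: the closer the genes interact (small distance),
          # the laxer the support and confidence cutoffs become.
          scale = 1.0 / (1.0 + distance)
          return base_support * scale, base_conf * scale, base_lift

      def keep_rule(rule):
          dvs, dvc, dvl = dynamic_thresholds(rule["distance"])
          return rule["support"] >= dvs and rule["confidence"] >= dvc and rule["lift"] >= dvl

      candidate = {"antecedent": "GENE_A", "consequent": "GENE_B",
                   "support": 0.15, "confidence": 0.7, "lift": 1.3, "distance": 1.0}
      print(keep_rule(candidate))   # True: 0.15 >= 0.1, 0.7 >= 0.3, 1.3 >= 1.0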

  3. Decision support system for triage management: A hybrid approach using rule-based reasoning and fuzzy logic.

    PubMed

    Dehghani Soufi, Mahsa; Samad-Soltani, Taha; Shams Vahdati, Samad; Rezaei-Hachesu, Peyman

    2018-06-01

    Fast and accurate patient triage for the response process is a critical first step in emergency situations. This process is often performed using a paper-based mode, which intensifies workload and difficulty, wastes time, and is prone to human error. This study aims to design and evaluate a decision support system (DSS) to determine the triage level. A combination of the Rule-Based Reasoning (RBR) and Fuzzy Logic Classifier (FLC) approaches was used to predict the triage level of patients according to the triage specialist's opinions and Emergency Severity Index (ESI) guidelines. RBR was applied for modeling the first to fourth decision points of the ESI algorithm. The data relating to vital signs were used as input variables and modeled using fuzzy logic. Narrative knowledge was converted to If-Then rules using XML. The extracted rules were then used to create the rule-based engine and predict the triage levels. Fourteen RBR and 27 fuzzy rules were extracted and used in the rule-based engine. The performance of the system was evaluated using three methods with real triage data. The accuracy of the clinical decision support system (CDSS; on the test data) was 99.44%. The evaluation of the error rate revealed that, when using the traditional method, 13.4% of the patients were mis-triaged, which is statistically significant. The completeness of the documentation also improved from 76.72% to 98.5%. The designed system was effective in determining patients' triage levels and proved helpful to nurses as they made decisions and generated nursing diagnoses based on triage guidelines. The hybrid approach can reduce triage misdiagnosis in a highly accurate manner and improve triage outcomes. Copyright © 2018 Elsevier B.V. All rights reserved.
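
    To make the hybrid structure concrete, the sketch below combines crisp If-Then rules for the upper decision points with fuzzy grading of vital signs, in the spirit of the RBR/FLC split described above. All cutoffs, membership shapes, and rules are hypothetical illustrations and are not the ESI values used in the study.

```python
# Minimal sketch of a hybrid rule-based + fuzzy triage scorer (illustrative only;
# the cutoffs, membership shapes and rules below are hypothetical and are NOT
# the ESI guideline values used in the study).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_vitals_risk(heart_rate, resp_rate):
    """Fuzzy grading of vital signs into a [0, 1] risk score."""
    hr_high = tri(heart_rate, 100, 140, 180)      # hypothetical membership
    rr_high = tri(resp_rate, 20, 30, 40)          # hypothetical membership
    return max(hr_high, rr_high)                  # OR-style aggregation

def triage_level(patient):
    """Crisp If-Then rules for the first decision points, fuzzy logic for vitals."""
    if patient["unresponsive"]:
        return 1
    if patient["high_risk_condition"]:
        return 2
    risk = fuzzy_vitals_risk(patient["heart_rate"], patient["resp_rate"])
    if risk > 0.5:
        return 2
    return 3 if patient["resources_needed"] >= 2 else 4

print(triage_level({"unresponsive": False, "high_risk_condition": False,
                    "heart_rate": 150, "resp_rate": 22,
                    "resources_needed": 2}))   # -> 2 for this toy patient
```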

  4. Pulsed Traveling-wave Quadrature Squeezing Using Quasi-phase Matched Lithium Niobate Crystals

    NASA Astrophysics Data System (ADS)

    Chen, Chao-Hsiang

    Interest in generating higher quantum noise squeezing, in order to develop methods that enhance optical measurement below the shot-noise limit in various applications, has grown in recent years. The noise suppression from squeezing can improve the SNR in coherent optical systems when the returning signal power is weak, such as in optical coherence tomography, LADAR, confocal microscopy, and low-light coherent imaging. Unlike the generation of squeezing with a continuous wave, which is currently developed mainly for gravitational wave detection in the LIGO project, the study of pulsed traveling waves is focused on industrial, medical, and other commercial interests. This dissertation presents the experimental results of pulsed traveling-wave squeezing. The intention of the study is to explore the possibility of using quasi-phase matched crystals to generate the highest possible degree of quadrature squeezing. In order to achieve this goal, efforts to test the various effects of spatial Gaussian modes and relative beam waist placement for the second-harmonic pump were carried out in order to further the understanding of the factors limiting pulsed traveling-wave squeezing. 20 mm and 30 mm-long periodically poled lithium niobate (PPLN) crystals were used in the experiment to generate a squeezed vacuum state. A maximum of 4.2+/-0.2 dB quadrature squeezing has been observed, and the measured anti-squeezing exceeds 20 dB. The phase sensitive amplification (PSA) gain and de-gain performance were also measured to compare with the results of measured squeezing. The PPLN crystals can produce high conversion efficiency of second-harmonic generation (SHG) without a cavity. When a long PPLN crystal is used in a squeezer, the beam propagation in the nonlinear medium does not follow the characteristics of thin crystals. Instead, it is operated under the long-crystal criterion, in which the crystal length is several times longer than the Rayleigh range of the injected beam in the crystal. Quasi

  5. Modeling of optical quadrature microscopy for imaging mouse embryos

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2008-02-01

    Optical quadrature microscopy (OQM) has been shown to provide the optical path difference through a mouse embryo, and has led to a novel method to count the total number of cells further into development than current non-toxic imaging techniques used in the clinic. The cell counting method has the potential to provide an additional quantitative viability marker for blastocyst transfer during in vitro fertilization. OQM uses a 633 nm laser within a modified Mach-Zehnder interferometer configuration to measure the amplitude and phase of the signal beam that travels through the embryo. Four cameras preceded by multiple beamsplitters record the four interferograms that are used within a reconstruction algorithm to produce an image of the complex electric field amplitude. Here we present a model for the electric field through the primary optical components in the imaging configuration and the reconstruction algorithm to calculate the signal to noise ratio when imaging mouse embryos. The model includes magnitude and phase errors in the individual reference and sample paths, fixed pattern noise, and noise within the laser and detectors. This analysis provides the foundation for determining the imaging limitations of OQM and the basis to optimize the cell counting method in order to introduce additional quantitative viability markers.
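
    The reconstruction from four interferograms can be illustrated with the generic four-step quadrature estimate below; the actual OQM reconstruction, which also models magnitude and phase errors and fixed-pattern noise, may differ in detail. The function name and the reference-amplitude parameter are assumptions for illustration.

```python
# Minimal sketch of a four-camera quadrature reconstruction (generic four-step
# phase-shifting estimate; the actual OQM algorithm may differ).
# I0, I90, I180, I270 are interferograms recorded with reference phases shifted
# by 0, 90, 180 and 270 degrees; R is the (known) reference amplitude.

import numpy as np

def reconstruct_field(I0, I90, I180, I270, R=1.0):
    """Recover the complex signal field E_s from four quadrature interferograms."""
    real_part = (I0 - I180) / (4.0 * R)     # proportional to Re{E_s}
    imag_part = (I90 - I270) / (4.0 * R)    # proportional to Im{E_s}
    return real_part + 1j * imag_part

# Synthetic check: a known field produces interferograms I = |R e^{i phi} + E|^2.
E = 0.3 * np.exp(1j * 0.8)
I = [abs(np.exp(1j * np.deg2rad(p)) + E) ** 2 for p in (0, 90, 180, 270)]
print(reconstruct_field(*I))   # close to E for this noise-free toy case
```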

  6. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis shows that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has better generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.

  7. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
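
    As an illustration of the underlying numerical machinery, the sketch below uses Gauss-Legendre nodes and weights to approximate the gravitational potential of a simple anomalous mass by a volume integral. It is a minimal flat-geometry, rectangular-prism example with hypothetical density and dimensions, not the paper's spherical-earth, equivalent-point-source formulation.

```python
# Minimal sketch of Gauss-Legendre quadrature for an anomalous-mass volume
# integral (illustrative only; the spherical-earth formulation, variable
# integration limits and dipole sources of the paper are not reproduced here).

import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho = 300.0            # hypothetical density contrast, kg/m^3

def gl_nodes(a, b, n):
    """Map n-point Gauss-Legendre nodes/weights from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

def potential(obs, bounds, n=8):
    """U(obs) = G * rho * triple integral of 1/r over a rectangular prism."""
    (x, wx), (y, wy), (z, wz) = (gl_nodes(a, b, n) for a, b in bounds)
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    W = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]
    r = np.sqrt((X - obs[0])**2 + (Y - obs[1])**2 + (Z - obs[2])**2)
    return G * rho * np.sum(W / r)

# Hypothetical 1 km x 1 km x 1 km prism buried 2-3 km deep, observed at the origin.
bounds = [(-500.0, 500.0), (-500.0, 500.0), (2000.0, 3000.0)]
print(potential((0.0, 0.0, 0.0), bounds))
```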

  8. TRICARE revision to CHAMPUS DRG-based payment system, pricing of hospital claims. Final rule.

    PubMed

    2014-05-21

    This Final rule changes TRICARE's current regulatory provision for inpatient hospital claims priced under the DRG-based payment system. Claims are currently priced by using the rates and weights that are in effect on a beneficiary's date of admission. This Final rule changes that provision to price such claims by using the rates and weights that are in effect on a beneficiary's date of discharge.

  9. Symbolic rule-based classification of lung cancer stages from free-text pathology reports.

    PubMed

    Nguyen, Anthony N; Lawley, Michael J; Hansen, David P; Bowman, Rayleen V; Clarke, Belinda E; Duhig, Edwina E; Colquist, Shoni

    2010-01-01

    To automatically classify lung tumor-node-metastases (TNM) cancer stages from free-text pathology reports using symbolic rule-based classification. By exploiting report substructure and the symbolic manipulation of systematized nomenclature of medicine-clinical terms (SNOMED CT) concepts in reports, statements in free text can be evaluated for relevance against factors relating to the staging guidelines. Post-coordinated SNOMED CT expressions based on templates were defined and populated by concepts in reports, and tested for subsumption by staging factors. The subsumption results were used to build logic according to the staging guidelines to calculate the TNM stage. The accuracy measure and confusion matrices were used to evaluate the TNM stages classified by the symbolic rule-based system. The system was evaluated against a database of multidisciplinary team staging decisions and a machine learning-based text classification system using support vector machines. Overall accuracy on a corpus of pathology reports for 718 lung cancer patients against a database of pathological TNM staging decisions was 72%, 78%, and 94% for T, N, and M staging, respectively. The system's performance was also comparable to support vector machine classification approaches. A system to classify lung TNM stages from free-text pathology reports was developed, and it was verified that the symbolic rule-based approach using SNOMED CT can be used for the extraction of key lung cancer characteristics from free-text reports. Future work will investigate the applicability of the proposed methodology to extracting other cancer characteristics and types.

  10. A rule-based system for real-time analysis of control systems

    NASA Astrophysics Data System (ADS)

    Larson, Richard R.; Millard, D. Edward

    1992-10-01

    An approach to automate the real-time analysis of flight-critical health monitoring and system status is being developed and evaluated at the NASA Dryden Flight Research Facility. A software package was developed in-house and installed as part of the extended aircraft interrogation and display system. This design features a knowledge-base structure in the form of rules to formulate interpretation and decision logic for real-time data. This technique has been applied to ground verification and validation testing and to flight-test monitoring, where quick, real-time, safety-of-flight decisions can be very critical. In many cases post processing and manual analysis of flight system data are not required. The processing of real-time data for analysis is described, along with the output format, which features a message stack display. The development, construction, and testing of the rule-driven knowledge base, along with an application using the X-31A flight test program, are presented.

  11. A rule-based system for real-time analysis of control systems

    NASA Technical Reports Server (NTRS)

    Larson, Richard R.; Millard, D. Edward

    1992-01-01

    An approach to automate the real-time analysis of flight-critical health monitoring and system status is being developed and evaluated at the NASA Dryden Flight Research Facility. A software package was developed in-house and installed as part of the extended aircraft interrogation and display system. This design features a knowledge-base structure in the form of rules to formulate interpretation and decision logic for real-time data. This technique has been applied to ground verification and validation testing and to flight-test monitoring, where quick, real-time, safety-of-flight decisions can be very critical. In many cases post processing and manual analysis of flight system data are not required. The processing of real-time data for analysis is described, along with the output format, which features a message stack display. The development, construction, and testing of the rule-driven knowledge base, along with an application using the X-31A flight test program, are presented.

  12. Dependent Measure and Time Constraints Modulate the Competition between Conflicting Feature-Based and Rule-Based Generalization Processes

    ERIC Educational Resources Information Center

    Cobos, Pedro L.; Gutiérrez-Cobo, María J.; Morís, Joaquín; Luque, David

    2017-01-01

    In our study, we tested the hypothesis that feature-based and rule-based generalization involve different types of processes that may affect each other producing different results depending on time constraints and on how generalization is measured. For this purpose, participants in our experiments learned cue-outcome relationships that followed…

  13. Presenting Germany's drug pricing rule as a cost-per-QALY rule.

    PubMed

    Gandjour, Afschin

    2012-06-01

    In Germany, the Institute for Quality and Efficiency in Health Care (IQWiG) makes recommendations for ceiling prices of drugs based on an evaluation of the relationship between costs and effectiveness. To set ceiling prices, IQWiG uses the following decision rule: the incremental cost-effectiveness ratio of a new drug compared with the next effective intervention should not be higher than that of the next effective intervention compared to its comparator. The purpose of this paper is to show that IQWiG's decision rule can be presented as a cost-per-QALY rule by using equity-weighted QALYs. This transformation shows where both rules share commonalities. Furthermore, it makes the underlying ethical implications of IQWiG's decision rule transparent and open to debate.

  14. CARSVM: a class association rule-based classification framework and its application to gene expression data.

    PubMed

    Kianmehr, Keivan; Alhajj, Reda

    2008-09-01

    In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and support vector machines (SVM). The goal is to benefit from the advantages of both the discriminative knowledge represented by class association rules and the classification power of the SVM algorithm, to construct an efficient and accurate classifier model that improves the interpretability problem of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In our proposed framework, instead of using the original training set, a set of rule-based feature vectors, generated based on the discriminative ability of class association rules over the training samples, is presented to the learning component of the SVM algorithm. We show that rule-based feature vectors present a highly qualified source of discrimination knowledge that can substantially impact the prediction power of SVM and associative classification techniques. They also provide users with greater understandability and interpretability. We have used four datasets from the UCI ML repository to evaluate the performance of the developed system in comparison with five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as a real-world application of the classification model, we present an extension of CARSVM combined with feature selection to be applied to gene expression data. Then, we describe how this combination can provide biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency, and effectiveness of the proposed model. From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated in the learning process of the SVM.
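
    The core idea, mapping each class association rule to a binary feature and training an SVM on the resulting rule-based feature vectors, can be sketched as follows. The toy rules, data, and scikit-learn usage are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the general CARSVM idea (not the authors' implementation):
# each mined class association rule becomes a binary feature ("does the rule's
# antecedent match this sample?"), and the resulting rule-based feature vectors
# are fed to an SVM. The toy rules and data below are hypothetical.

import numpy as np
from sklearn.svm import SVC

# Each rule antecedent is a set of (attribute_index, value) conditions.
rules = [{(0, 1), (2, 0)},        # e.g. "attr0 == 1 AND attr2 == 0"
         {(1, 1)},
         {(0, 0), (1, 0)}]

def rule_features(X, rules):
    """Map raw samples to binary rule-based feature vectors."""
    F = np.zeros((len(X), len(rules)))
    for i, x in enumerate(X):
        for j, antecedent in enumerate(rules):
            F[i, j] = all(x[attr] == val for attr, val in antecedent)
    return F

X = np.array([[1, 0, 0], [0, 1, 1], [1, 1, 0], [0, 0, 1]])
y = np.array([1, 0, 1, 0])

clf = SVC(kernel="linear").fit(rule_features(X, rules), y)
print(clf.predict(rule_features(X, rules)))
```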

  15. A Hybrid Approach Using Case-Based Reasoning and Rule-Based Reasoning to Support Cancer Diagnosis: A Pilot Study.

    PubMed

    Saraiva, Renata M; Bezerra, João; Perkusich, Mirko; Almeida, Hyggo; Siebra, Clauirton

    2015-01-01

    Recently there has been an increasing interest in applying information technology to support the diagnosis of diseases such as cancer. In this paper, we present a hybrid approach using case-based reasoning (CBR) and rule-based reasoning (RBR) to support cancer diagnosis. We used symptoms, signs, and personal information from patients as inputs to our model. To form specialized diagnoses, we used rules to define the input factors' importance according to the patient's characteristics. The model's output presents the probability of the patient having a type of cancer. To carry out this research, we had the approval of the ethics committee at Napoleão Laureano Hospital, in João Pessoa, Brazil. To define our model's cases, we collected real patient data at Napoleão Laureano Hospital. To define our model's rules and weights, we researched specialized literature and interviewed health professionals. To validate our model, we used K-fold cross validation with the data collected at Napoleão Laureano Hospital. The results showed that our approach is an effective CBR system for diagnosing cancer.

  16. Mining association rule based on the diseases population for recommendation of medicine need

    NASA Astrophysics Data System (ADS)

    Harahap, M.; Husein, A. M.; Aisyah, S.; Lubis, F. R.; Wijaya, B. A.

    2018-04-01

    Inappropriate selection of medicines can leave medicine stock empty, which has an impact on medical services and on the economics of the hospital. An appropriate medicine selection process therefore requires an automated way to determine needs based on the development of patients' illnesses. In this study, we analyzed patient prescriptions to identify the relationship between the disease and the medicine used by the physician in treating the patient's illness. The analytical framework includes: (1) patient prescription data collection, (2) applying k-means clustering to classify the top 10 diseases, and (3) applying the Apriori algorithm to find association rules based on support, confidence, and lift values. In tests on patient prescription data sets from 2015-2016, applying the k-means algorithm to cluster the 10 dominant diseases significantly affected the confidence and support values of the association rules found by the Apriori algorithm, making the resulting disease-medicine association rules more consistent. The support, confidence, and lift values of the disease-medicine rules can be used as recommendations for appropriate medicine selection. Based on the hospital's disease progression, medicine procurement can then be made more nearly optimal.
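
    The rule metrics referred to above can be computed directly from transaction data. The sketch below shows support, confidence, and lift for a hypothetical disease-medicine rule; the transactions and item names are invented for illustration.

```python
# Minimal sketch of support/confidence/lift for disease -> medicine rules
# (illustrative only; the study used k-means clustering plus the Apriori
# algorithm on real prescription data). Transactions and rules are hypothetical.

transactions = [
    {"hypertension", "amlodipine", "metformin"},
    {"hypertension", "amlodipine"},
    {"diabetes", "metformin"},
    {"hypertension", "diabetes", "metformin"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def rule_metrics(antecedent, consequent):
    s_both = support(antecedent | consequent)
    conf = s_both / support(antecedent)
    lift = conf / support(consequent)
    return s_both, conf, lift

print(rule_metrics({"hypertension"}, {"amlodipine"}))
# -> support 0.5, confidence ~0.667, lift ~1.333 for this toy data
```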

  17. Comparison of a 28 Channel-Receive Array Coil and Quadrature Volume Coil for Morphologic Imaging and T2 Mapping of Knee Cartilage at 7 Tesla

    PubMed Central

    Chang, Gregory; Wiggins, Graham C.; Xia, Ding; Lattanzi, Riccardo; Madelin, Guillaume; Raya, Jose G.; Finnerty, Matthew; Fujita, Hiroyuki; Recht, Michael P.; Regatte, Ravinder R.

    2011-01-01

    Purpose To compare a new birdcage-transmit, 28 channel-receive array (28 Ch) coil and a quadrature volume coil for 7 Tesla morphologic MRI and T2 mapping of knee cartilage. Methods The right knees of ten healthy subjects were imaged on a 7 Tesla whole body MR scanner using both coils. 3-dimensional fast low-angle shot (3D-FLASH) and multi-echo spin-echo (MESE) sequences were implemented. Cartilage signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), thickness, and T2 values were assessed. Results SNR/CNR was 17–400% greater for the 28 Ch compared to the quadrature coil (p≤0.005). Bland-Altman plots show mean differences between measurements of tibial/femoral cartilage thickness and T2 values obtained with each coil to be small (−0.002±0.009 cm/0.003±0.011 cm) and large (−6.8±6.7 ms/−8.2±9.7 ms), respectively. For the 28 Ch coil, when parallel imaging with acceleration factors (AF) 2, 3, and 4 was performed, SNR retained was: 62–69%, 51–55%, and 39–45%. Conclusion A 28 Ch knee coil provides increased SNR/CNR for 7T cartilage morphologic imaging and T2 mapping. Coils should be switched with caution during clinical studies because T2 values may differ. The greater SNR of the 28 Ch coil could be used to perform parallel imaging with AF2 and obtain similar SNR as the quadrature coil. PMID:22095723

  18. Intertransaction Class Association Rule Mining Based on Genetic Network Programming and Its Application to Stock Market Prediction

    NASA Astrophysics Data System (ADS)

    Yang, Yuchen; Mabu, Shingo; Shimada, Kaoru; Hirasawa, Kotaro

    Intertransaction association rules have been reported to be useful in many fields such as stock market prediction, but there are still few efficient methods for extracting them from large data sets. Furthermore, how to use and measure these more complex rules should be considered carefully. In this paper, we propose a new intertransaction class association rule mining method based on Genetic Network Programming (GNP), which has the ability to overcome some shortcomings of Apriori-like intertransaction association methods. Moreover, a general classifier model for intertransaction rules is also introduced. In experiments on the real-world application of stock market prediction, the method shows its efficiency and ability to obtain good results, and can bring further benefits when combined with a suitable classifier considering a larger interval span.

  19. A SiGe Quadrature Pulse Modulator for Superconducting Qubit State Manipulation

    NASA Astrophysics Data System (ADS)

    Kwende, Randy; Bardin, Joseph

    Manipulation of the quantum states of microwave superconducting qubits typically requires the generation of coherent modulated microwave pulses. While many off-the-shelf instruments are capable of generating such pulses, a more integrated approach is likely required if fault-tolerant quantum computing architectures are to be implemented. In this work, we present progress towards a pulse generator specifically designed to drive superconducting qubits. The device is implemented in a commercial silicon process and has been designed with energy efficiency and scalability in mind. Pulse generation is carried out using a unique approach in which modulation is applied directly to the in-phase and quadrature components of a carrier signal in the 1-10 GHz frequency range through a digital-analog conversion process designed specifically for this application. The prototype pulse generator can be digitally programmed and supports sequencing of pulses with independent amplitude and phase waveforms. These amplitude and phase waveforms can be programmed through a serial programming interface. Detailed performance of the pulse generator at room temperature and 4 K will be presented.

  20. Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.

    PubMed

    Singh, Raj Mohan

    2008-04-01

    Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides, and herbicides into nearby streams. This is a matter of serious concern for water managers and environmental researchers. The application of chemicals to agricultural fields and the transport of these chemicals into streams are uncertain, which complicates reliable stream-quality prediction. The characteristics of the applied chemical and the percentage of area under chemical application are some of the main inputs that determine the pollutant concentration in streams. Each of these inputs and outputs may contain measurement errors. A fuzzy rule-based model built on fuzzy sets is well suited to addressing uncertainties in the inputs, since overlapping membership functions can be defined for each input even in situations of limited data availability. In this study, this property of fuzzy sets to address uncertainty in the input-output relationship is utilized to estimate the concentration of a herbicide, atrazine, in a stream. Data from the White River basin, part of the Mississippi River system, are used to develop the fuzzy rule-based models. The performance of the developed methodology is found to be encouraging.
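
    A minimal sketch of such a fuzzy rule-based estimator is given below, using triangular membership functions and weighted-average (zero-order Sugeno) defuzzification. The membership breakpoints and rule consequents are hypothetical, not the calibrated values of the study.

```python
# Minimal sketch of a zero-order Sugeno-style fuzzy estimate of pollutant
# concentration from two inputs (illustrative only; breakpoints and rule
# consequents are hypothetical, not the study's calibrated values).

def tri(x, a, b, c):
    """Triangular membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def atrazine_estimate(applied_kg_per_ha, treated_area_pct):
    # Rule strengths: "application high AND treated area large -> high output", etc.
    rules = [
        (min(tri(applied_kg_per_ha, 1, 3, 5), tri(treated_area_pct, 40, 80, 100)), 12.0),
        (min(tri(applied_kg_per_ha, 0, 1, 3), tri(treated_area_pct, 0, 30, 60)), 2.0),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0   # weighted-average defuzzification (ug/L)

print(atrazine_estimate(2.5, 70))
```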

  1. Improved Personalized Recommendation Based on Causal Association Rule and Collaborative Filtering

    ERIC Educational Resources Information Center

    Lei, Wu; Qing, Fang; Zhou, Jin

    2016-01-01

    User evaluations of resources on a recommender system are usually limited, which causes an extremely sparse user rating matrix and greatly reduces the accuracy of personalized recommendation, especially for new users or new items. This paper presents a recommendation method based on rating prediction using causal association rules.…

  2. A CMOS Self-Contained Quadrature Signal Generator for SoC Impedance Spectroscopy.

    PubMed

    Márquez, Alejandro; Pérez-Bailón, Jorge; Calvo, Belén; Medrano, Nicolás; Martínez, Pedro A

    2018-04-30

    This paper presents a low-power fully integrated quadrature signal generator for system-on-chip (SoC) impedance spectroscopy applications. It has been designed in a 0.18 μm-1.8 V CMOS technology as a self-contained oscillator, without the need for an external reference clock. The frequency can be digitally tuned from 10 to 345 kHz with 12-bit accuracy and a relative mean error below 1.7%, thus supporting a wide range of impedance sensing applications. The proposal is experimentally validated in two impedance spectrometry examples, achieving good magnitude and phase recovery results compared to the results obtained using a commercial LCR-meter. Besides the wide frequency tuning range, the proposed programmable oscillator features a total power consumption lower than 0.77 mW and an active area of 0.129 mm², thus constituting a highly suitable choice as stimulation module for instrument-on-a-chip devices.

  3. Differential impact of relevant and irrelevant dimension primes on rule-based and information-integration category learning.

    PubMed

    Grimm, Lisa R; Maddox, W Todd

    2013-11-01

    Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning. © 2013.

  4. FPGA-based LDPC-coded APSK for optical communication systems.

    PubMed

    Zou, Ding; Lin, Changyu; Djordjevic, Ivan B

    2017-02-20

    In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead for both 16-APSK and 64-APSK. The field programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.

  5. Traditional versus rule-based programming techniques: Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    To the software design community, the concern over the costs associated with a program's execution time and implementation is great. It is always desirable, and sometimes imperative, that the proper programming technique is chosen which minimizes all costs for a given application or type of application. A study is described that compared cost-related factors associated with traditional programming techniques to rule-based programming techniques for a specific application. The results of this study favored the traditional approach regarding execution efficiency, but favored the rule-based approach regarding programmer productivity (implementation ease). Although this study examined a specific application, the results should be widely applicable.

  6. Wide and Narrow CMEs and Their Source Explosions Observed at the Spring 2003 SOHO-Sun-Ulysses Quadrature

    NASA Technical Reports Server (NTRS)

    Suess, Steven; Corti, G.; Poletto, G.; Sterling, A.; Moore, R.

    2006-01-01

    At the time of the spring 2003 Ulysses-SOHO-Sun quadrature, Ulysses was off the East limb of the Sun at 14.5 degrees north latitude and 4.91 AU. LASCO/C2 images show small transient events that originated from near the limb on May 25, 26 and 27 in the north-east quadrant, along with a large Coronal Mass Ejection (CME) that originated from an active region near disk center on May 26. Ulysses data bear clear signatures of the large CME, specifically including an enhanced abundance of highly ionized Fe. SOHO/UVCS spectra at 1.75 solar radii, near the radial direction to Ulysses, give no evidence of emission from high temperature lines, even for the large CME: instead, for the small events, occasional transient high emission in cool lines was observed, such as the CIII 977 Angstrom line usually absent at coronal levels. Each of these events lasted ca. 1 hour or less and never affected lines from ions forming above ca. 10^6 K. Compact eruptions in Helium 304 Angstrom EIT images, related to the small UVCS transients, were observed at the limb of the Sun over the same period. At least one of these surge events produced a narrow CME observed in LASCO/C2. Most probably all these events are compact magnetic explosions (surges/jets, from around a small island of included polarity) which ejected cool material from lower levels. Ulysses data have been analyzed to find evidence of the cool, narrow CME events, but none or little was found. This puzzling scenario, where events seen by UVCS have no in situ counterparts and vice versa, can be partially explained once the region where the large CME originated is recognized as being at the center of the solar disk so that the CME material was actually much further from the Sun than the 1.7 Rsun height of the UVCS slit off the limb. Conversely, the narrow events may simply have missed Ulysses or been too brief for reliable signatures in composition and ionization state. A basic feature demonstrated by these observations is that large

  7. Rule Based System for Medicine Inventory Control Using Radio Frequency Identification (RFID)

    NASA Astrophysics Data System (ADS)

    Nugraha, Joanna Ardhyanti Mita; Suryono; Suseno, dan Jatmiko Endro

    2018-02-01

    A rule-based system is very efficient for ensuring that drug stock remains available, using Radio Frequency Identification (RFID) as an automatic input channel. The method keeps drug stock available by analyzing the needs of drug users. The research data comprised one year of drug usage in a hospital. The data were processed using ABC classification to determine the drugs with fast, medium, and slow movement. For each resulting class, a rule-based algorithm was applied to determine the safety stock and Reorder Point (ROP). This research yielded safety stock and ROP values that vary depending on the class of each drug. Validation was done by comparing the safety stock and reorder point calculated manually and by the system; the mean deviation was 0.03 for safety stock and 0.08 for ROP.
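
    The safety stock and reorder point rules referred to above can be illustrated with the standard textbook formulas under a normal-demand assumption; the paper's ABC-class-specific rules may differ. All input numbers below are hypothetical.

```python
# Minimal sketch of safety-stock and reorder-point (ROP) rules (standard
# textbook formulas under a normal-demand assumption; the paper's class-
# specific rules may differ). Inputs are hypothetical.

import math
from statistics import NormalDist

def safety_stock(daily_demand_std, lead_time_days, service_level=0.95):
    z = NormalDist().inv_cdf(service_level)          # safety factor
    return z * daily_demand_std * math.sqrt(lead_time_days)

def reorder_point(daily_demand_mean, daily_demand_std, lead_time_days,
                  service_level=0.95):
    return (daily_demand_mean * lead_time_days
            + safety_stock(daily_demand_std, lead_time_days, service_level))

# Fast-moving ("A" class) drug: 120 units/day on average, std 30, 5-day lead time.
print(round(safety_stock(30, 5), 1), round(reorder_point(120, 30, 5), 1))
```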

  8. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    PubMed

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

    In order to overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei province.
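
    For reference, a generic ELM of the kind used as the underlying PRTSA model can be written in a few lines: a random hidden layer followed by a least-squares solve for the output weights. The sketch below is illustrative only and omits the paper's feature selection and Ant-miner rule extraction.

```python
# Minimal sketch of an extreme learning machine (ELM) classifier (generic ELM;
# the paper's feature selection and Ant-miner rule extraction are not shown).

import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    """Random hidden layer + least-squares output weights (binary labels in {0,1})."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)

# Toy stability data: 2 features, label 1 = "stable".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
W, b, beta = elm_train(X, y)
print((elm_predict(X, W, b, beta) == y).mean())   # training accuracy
```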

  9. Neural activity in superior parietal cortex during rule-based visual-motor transformations.

    PubMed

    Hawkins, Kara M; Sayegh, Patricia; Yan, Xiaogang; Crawford, J Douglas; Sergio, Lauren E

    2013-03-01

    Cognition allows for the use of different rule-based sensorimotor strategies, but the neural underpinnings of such strategies are poorly understood. The purpose of this study was to compare neural activity in the superior parietal lobule during a standard (direct interaction) reaching task, with two nonstandard (gaze and reach spatially incongruent) reaching tasks requiring the integration of rule-based information. Specifically, these nonstandard tasks involved dissociating the planes of reach and vision or rotating visual feedback by 180°. Single unit activity, gaze, and reach trajectories were recorded from two female Macaca mulattas. In all three conditions, we observed a temporal discharge pattern at the population level reflecting early reach planning and on-line reach monitoring. In the plane-dissociated task, we found a significant overall attenuation in the discharge rate of cells from deep recording sites, relative to standard reaching. We also found that cells modulated by reach direction tended to be significantly tuned either during the standard or the plane-dissociated task but rarely during both. In the standard versus feedback reversal comparison, we observed some cells that shifted their preferred direction by 180° between conditions, reflecting maintenance of directional tuning with respect to the reach goal. Our findings suggest that the superior parietal lobule plays an important role in processing information about the nonstandard nature of a task, which, through reciprocal connections with precentral motor areas, contributes to the accurate transformation of incongruent sensory inputs into an appropriate motor output. Such processing is crucial for the integration of rule-based information into a motor act.

  10. Conditioning of high voltage radio frequency cavities by using fuzzy logic in connection with rule based programming

    NASA Astrophysics Data System (ADS)

    Perreard, S.; Wildner, E.

    1994-12-01

    Many processes are controlled by experts using some kind of mental model to decide on actions and make conclusions. This model, based on heuristic knowledge, can often be represented by rules and does not have to be particularly accurate. Such is the case for the problem of conditioning high voltage RF cavities; the expert has to decide, by observing some criteria, whether to increase or to decrease the voltage and by how much. A program has been implemented which can be applied to a class of similar problems. The kernel of the program is a small rule base, which is independent of the kind of cavity. To model a specific cavity, we use fuzzy logic which is implemented as a separate routine called by the rule base, to translate from numeric to symbolic information.

  11. Rule-Based Reasoning Is Fast and Belief-Based Reasoning Can Be Slow: Challenging Current Explanations of Belief-Bias and Base-Rate Neglect

    ERIC Educational Resources Information Center

    Newman, Ian R.; Gibb, Maia; Thompson, Valerie A.

    2017-01-01

    It is commonly assumed that belief-based reasoning is fast and automatic, whereas rule-based reasoning is slower and more effortful. Dual-Process theories of reasoning rely on this speed-asymmetry explanation to account for a number of reasoning phenomena, such as base-rate neglect and belief-bias. The goal of the current study was to test this…

  12. Association Rule Based Feature Extraction for Character Recognition

    NASA Astrophysics Data System (ADS)

    Dua, Sumeet; Singh, Harpreet

    Association rules that represent isomorphisms among data have gained importance in exploratory data analysis because they can find inherent, implicit, and interesting relationships among data. They are also commonly used in data mining to extract the conditions among attribute values that occur together frequently in a dataset [1]. These rules have a wide range of applications, for example in the financial and retail sectors, marketing, sales, and medicine.

  13. Stress fields around two pores in an elastic body: exact quadrature domain solutions.

    PubMed

    Crowdy, Darren

    2015-08-08

    Analytical solutions are given for the stress fields, in both compression and far-field shear, in a two-dimensional elastic body containing two interacting non-circular pores. The two complex potentials governing the solutions are found by using a conformal mapping from a pre-image annulus with those potentials expressed in terms of the Schottky-Klein prime function for the annulus. Solutions for a three-parameter family of elastic bodies with two equal symmetric pores are presented and the compressibility of a special family of pore pairs is studied in detail. The methodology extends to two unequal pores. The importance for boundary value problems of plane elasticity of a special class of planar domains known as quadrature domains is also elucidated. This observation provides the route to generalization of the mathematical approach here to finding analytical solutions for the stress fields in bodies containing any finite number of pores.

  14. Overcoming rule-based rigidity and connectionist limitations through massively-parallel case-based reasoning

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    Symbol manipulation as used in traditional Artificial Intelligence has been criticized by neural net researchers for being excessively inflexible and sequential. On the other hand, the application of neural net techniques to the types of high-level cognitive processing studied in traditional artificial intelligence presents major problems as well. A promising way out of this impasse is to build neural net models that accomplish massively parallel case-based reasoning. Case-based reasoning, which has received much attention recently, is essentially the same as analogy-based reasoning, and avoids many of the problems leveled at traditional artificial intelligence. Further problems are avoided by doing many strands of case-based reasoning in parallel, and by implementing the whole system as a neural net. In addition, such a system provides an approach to some aspects of the problems of noise, uncertainty and novelty in reasoning systems. The current neural net system (Conposit), which performs standard rule-based reasoning, is being modified into a massively parallel case-based reasoning version.

  15. Image segmentation using association rule features.

    PubMed

    Rushing, John A; Ranganath, Heggere; Hinke, Thomas H; Graves, Sara J

    2002-01-01

    A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images. The frequency of occurrence of these structures can be used to characterize texture. Methods for segmentation of textured images based on association rule features are described. Simulation results using images consisting of man-made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.

  16. Rule groupings: An approach towards verification of expert systems

    NASA Technical Reports Server (NTRS)

    Mehrotra, Mala

    1991-01-01

    Knowledge-based expert systems are playing an increasingly important role in NASA space and aircraft systems. However, many of NASA's software applications are life- or mission-critical and knowledge-based systems do not lend themselves to the traditional verification and validation techniques for highly reliable software. Rule-based systems lack the control abstractions found in procedural languages. Hence, it is difficult to verify or maintain such systems. Our goal is to automatically structure a rule-based system into a set of rule-groups having a well-defined interface to other rule-groups. Once a rule base is decomposed into such 'firewalled' units, studying the interactions between rules would become more tractable. Verification-aid tools can then be developed to test the behavior of each such rule-group. Furthermore, the interactions between rule-groups can be studied in a manner similar to integration testing. Such efforts will go a long way towards increasing our confidence in the expert-system software. Our research efforts address the feasibility of automating the identification of rule groups, in order to decompose the rule base into a number of meaningful units.

  17. Post-decomposition optimizations using pattern matching and rule-based clustering for multi-patterning technology

    NASA Astrophysics Data System (ADS)

    Wang, Lynn T.-N.; Madhavan, Sriram

    2018-03-01

    A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. Then, DFM scores are computed for the possible layout fixes, and the fix with the best score is applied. The proposed methodology was applied to two 20nm products with a chip area of 11 mm² on the metal 2 layer. All the hotspots were resolved. The number of DFM spacing violations decreased by 7-15%.

  18. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.

    PubMed

    van Ginneken, Bram

    2017-03-01

    Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.

  19. Implementing a Rule-Based Contract Compliance Checker

    NASA Astrophysics Data System (ADS)

    Strano, Massimo; Molina-Jimenez, Carlos; Shrivastava, Santosh

    The paper describes the design and implementation of an independent, third-party contract monitoring service called the Contract Compliance Checker (CCC). The CCC is provided with the specification of the contract in force, and is capable of observing and logging the relevant business-to-business (B2B) interaction events, in order to determine whether the actions of the business partners are consistent with the contract. A contract specification language called EROP (for Events, Rights, Obligations and Prohibitions) has been developed for the CCC based on business rules; it provides constructs to specify which rights, obligations and prohibitions become active and inactive after the occurrence of events related to the execution of business operations. The system has been designed to work with B2B industry standards such as ebXML and RosettaNet.

  20. MR imaging of the inner ear: comparison of a three-dimensional fast spin-echo sequence with use of a dedicated quadrature-surface coil with a gadolinium-enhanced spoiled gradient-recalled sequence.

    PubMed

    Naganawa, S; Ito, T; Fukatsu, H; Ishigaki, T; Nakashima, T; Ichinose, N; Kassai, Y; Miyazaki, M

    1998-09-01

    To prospectively evaluate the sensitivity and specificity of magnetic resonance (MR) imaging of the inner ear with a long-echo-train, three-dimensional (3D), asymmetric Fourier-transform, fast spin-echo (SE) sequence and a dedicated quadrature-surface phased-array coil for detecting vestibular schwannoma in the cerebellopontine angle and the internal auditory canal. In 205 patients (410 ears) with ear symptoms, 1.5-T MR imaging was performed with unenhanced 3D asymmetric fast SE and gadolinium-enhanced 3D spoiled gradient-recalled (SPGR) sequences with use of a quadrature-surface phased-array coil. The 3D asymmetric fast SE images were reviewed by two radiologists, with the gadolinium-enhanced 3D SPGR images used as the standard of reference. Nineteen lesions were detected in the 410 ears (diameter range, 2-30 mm; mean, 10.5 mm +/- 6.4 [standard deviation]; five lesions were smaller than 5 mm). With 3D asymmetric fast SE, sensitivity, specificity, and accuracy, respectively, were 100%, 99.5%, and 99.5% for observer 1 and 100%, 99.7%, and 99.8% for observer 2. The unenhanced 3D asymmetric fast SE sequence with a quadrature-surface phased-array coil allows the reliable detection of vestibular schwannoma in the cerebellopontine angle and internal auditory canal.

  1. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and

  2. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10
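
    The moment-based construction referred to in both records above can be illustrated with a small Golub-Welsch-style routine: build the Hankel matrix of raw moments, extract recurrence coefficients from its Cholesky factor, and diagonalize the resulting Jacobi matrix to obtain nodes and weights. This is a minimal sketch of the general technique; SAMBA's aPC basis construction and sparse Smolyak extension are not reproduced.

```python
# Minimal sketch of building a Gaussian quadrature rule directly from raw moments
# via the Hankel matrix (Golub-Welsch style). Illustrates the core idea of
# moment-based quadrature; it is not the SAMBA implementation.

import numpy as np

def quadrature_from_moments(m, n):
    """n-point Gauss rule for a measure with raw moments m[0..2n]."""
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                 # requires H positive definite
    alpha = np.empty(n)
    beta = np.empty(n - 1)
    for j in range(n):
        alpha[j] = R[j, j + 1] / R[j, j] - (R[j - 1, j] / R[j - 1, j - 1] if j else 0.0)
    for j in range(n - 1):
        beta[j] = R[j + 1, j + 1] / R[j, j]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)  # Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0, :] ** 2
    return nodes, weights

# Check against the 3-point Gauss-Legendre rule (moments of dx on [-1, 1]).
m = [2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(7)]
nodes, weights = quadrature_from_moments(m, 3)
print(nodes)    # approx [-0.7746, 0, 0.7746]
print(weights)  # approx [0.5556, 0.8889, 0.5556]
```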

  3. Spatial Rule-Based Modeling: A Method and Its Application to the Human Mitotic Kinetochore

    PubMed Central

    Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter

    2013-01-01

    A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796

  4. Using new aggregation operators in rule-based intelligent control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.

    1990-01-01

    A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
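
    The OWA operator itself is simple to state: the weights are applied to the argument values after sorting them in descending order, so that different weight vectors interpolate between Or (max) and And (min). A minimal sketch with hypothetical weights:

```python
# Minimal sketch of the ordered weighted averaging (OWA) operator used to
# aggregate rule antecedents; the example weights are hypothetical. With
# weights (1,0,...,0) OWA behaves like Or (max), with (0,...,0,1) like And (min).

def owa(values, weights):
    """OWA: weights are applied to the values sorted in descending order."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(values) == len(weights)
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

memberships = [0.9, 0.4, 0.7]              # degrees of satisfaction of three conditions
print(owa(memberships, [1.0, 0.0, 0.0]))   # 0.9  (pure Or / max)
print(owa(memberships, [0.0, 0.0, 1.0]))   # 0.4  (pure And / min)
print(owa(memberships, [0.2, 0.6, 0.2]))   # 0.68 -> a "Most"-like soft quantifier
```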

  5. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  7. Merit-Based Incentive Payment System: Meaningful Changes in the Final Rule Brings Cautious Optimism.

    PubMed

    Manchikanti, Laxmaiah; Helm II, Standiford; Calodney, Aaron K; Hirsch, Joshua A

    2017-01-01

    The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) eliminated the flawed Sustainable Growth Rate (SGR) formula - a longstanding crucial issue of concern for health care providers and Medicare beneficiaries. MACRA also included a quality improvement program entitled "The Merit-Based Incentive Payment System," or MIPS. The proposed rule of MIPS sought to streamline existing federal quality efforts and therefore linked 4 distinct programs into one. Three existing programs, namely meaningful use (MU), the Physician Quality Reporting System (PQRS), and the value-based payment (VBP) system, were merged, with the addition of a Clinical Improvement Activity category. The proposed rule also changed the name of MU to Advancing Care Information, or ACI. ACI contributes 25% of the composite score of the four programs, PQRS contributes 50% of the composite score, while the VBP system, which deals with resource use or cost, contributes 10% of the composite score. The newest category, Improvement Activities or IA, contributes 15% to the composite score. The proposed rule also created what it called a design incentive that drives movement toward delivery system reform principles with the inclusion of Advanced Alternative Payment Models (APMs). Following the release of the proposed rule, the medical community, as well as Congress, provided substantial input to the Centers for Medicare and Medicaid Services (CMS), expressing their concerns. The American Society of Interventional Pain Physicians (ASIPP) focused on 3 important aspects: delay the implementation, provide a 3-month performance period, and provide the ability to submit meaningful quality measures in a timely and economic manner. The final rule accepted many of the comments from various organizations, including several of those specifically emphasized by ASIPP, with acceptance of a 3-month reporting period, as well as the ability to submit non-MIPS measures to improve real quality and make the system meaningful. CMS also provided a mechanism for
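
    The composite-score arithmetic quoted above (50% quality/PQRS, 25% ACI, 15% improvement activities, 10% cost/VBP) can be checked with a short calculation; the category scores below are hypothetical.

    ```python
    # Weights are those quoted in the abstract; the category scores (0-100) are made up.
    weights = {"quality_pqrs": 0.50, "aci": 0.25, "improvement_activities": 0.15, "cost_vbp": 0.10}
    scores  = {"quality_pqrs": 80.0, "aci": 90.0, "improvement_activities": 100.0, "cost_vbp": 70.0}

    composite = sum(weights[k] * scores[k] for k in weights)
    print(f"MIPS composite score: {composite:.1f}")   # 0.5*80 + 0.25*90 + 0.15*100 + 0.1*70 = 84.5
    ```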

  8. 3-D frequency-domain seismic wave modelling in heterogeneous, anisotropic media using a Gaussian quadrature grid approach

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-01-01

    We present an extension of the 3-D spectral element method (SEM), called the Gaussian quadrature grid (GQG) approach, to simulate, in the frequency domain, seismic waves in 3-D heterogeneous anisotropic media involving complex free-surface topography and/or sub-surface geometry. It differs from the conventional SEM in two ways. The first is the replacement of the hexahedral element mesh with 3-D Gaussian quadrature abscissae to directly sample the physical properties or model parameters. This gives a point-gridded model which more exactly and easily matches the free-surface topography and/or any sub-surface interfaces. It does not require that the topography be highly smooth, a condition required in the curved finite difference method and the spectral method. The second is the derivation of a complex-valued elastic tensor expression for the perfectly matched layer (PML) model parameters for a general anisotropic medium, whose imaginary parts are determined by the PML formulation rather than by having to choose a specific class of viscoelastic material. Furthermore, the new formulation is much simpler than the time-domain-oriented PML implementation. The specified imaginary parts of the density and elastic moduli are valid for arbitrary anisotropic media. We give two numerical solutions in full-space homogeneous, isotropic and anisotropic media, respectively, and compare them with the analytical solutions, as well as show the excellent effectiveness of the PML model parameters. In addition, we perform numerical simulations for 3-D seismic waves in a heterogeneous, anisotropic model incorporating a free-surface ridge topography and validate the results against the 2.5-D modelling solution, and demonstrate the capability of the approach to handle realistic situations.
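
    A minimal sketch of the point-gridded Gauss-Legendre sampling idea (not the seismic SEM/PML machinery itself): build a 3-D quadrature grid over a box and integrate a smooth stand-in for a model property. The function and domain below are arbitrary choices for illustration.

    ```python
    import numpy as np

    def gl_grid_1d(a, b, n):
        """Gauss-Legendre abscissae/weights mapped from [-1, 1] to [a, b]."""
        x, w = np.polynomial.legendre.leggauss(n)
        return 0.5 * (b - a) * x + 0.5 * (b + a), 0.5 * (b - a) * w

    # Build a 3-D quadrature grid over the box [0, 1]^3 (stand-in for a model volume).
    (xs, wx), (ys, wy), (zs, wz) = (gl_grid_1d(0.0, 1.0, 5) for _ in range(3))
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    W = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]

    f = X**2 * Y * Z        # smooth stand-in for a sampled model property
    print(np.sum(W * f))    # ~0.083333; the exact value of the integral is 1/12
    ```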

  9. Does GEM-Encoding Clinical Practice Guidelines Improve the Quality of Knowledge Bases? A Study with the Rule-Based Formalism

    PubMed Central

    Georg, Gersende; Séroussi, Brigitte; Bouaud, Jacques

    2003-01-01

    The aim of this work was to determine whether the GEM-encoding step could improve the representation of clinical practice guidelines as formalized knowledge bases. We used the 1999 Canadian recommendations for the management of hypertension, chosen as the knowledge source in the ASTI project. We first clarified semantic ambiguities of therapeutic sequences recommended in the guideline by proposing an interpretative framework of therapeutic strategies. Then, after a formalization step to standardize the terms used to characterize clinical situations, we created the GEM-encoded instance of the guideline. We developed a module for the automatic derivation of a rule base, BR-GEM, from the instance. BR-GEM was then compared to the rule base, BR-ASTI, embedded within the critic mode of ASTI, and manually built by two physicians from the same Canadian guideline. As compared to BR-ASTI, BR-GEM is more specific and covers more clinical situations. When evaluated on 10 patient cases, the GEM-based approach led to promising results. PMID:14728173

  10. Does GEM-encoding clinical practice guidelines improve the quality of knowledge bases? A study with the rule-based formalism.

    PubMed

    Georg, Gersende; Séroussi, Brigitte; Bouaud, Jacques

    2003-01-01

    The aim of this work was to determine whether the GEM-encoding step could improve the representation of clinical practice guidelines as formalized knowledge bases. We used the 1999 Canadian recommendations for the management of hypertension, chosen as the knowledge source in the ASTI project. We first clarified semantic ambiguities of therapeutic sequences recommended in the guideline by proposing an interpretative framework of therapeutic strategies. Then, after a formalization step to standardize the terms used to characterize clinical situations, we created the GEM-encoded instance of the guideline. We developed a module for the automatic derivation of a rule base, BR-GEM, from the instance. BR-GEM was then compared to the rule base, BR-ASTI, embedded within the critic mode of ASTI, and manually built by two physicians from the same Canadian guideline. As compared to BR-ASTI, BR-GEM is more specific and covers more clinical situations. When evaluated on 10 patient cases, the GEM-based approach led to promising results.

  11. Sensor-based activity recognition using extended belief rule-based inference methodology.

    PubMed

    Calzada, A; Liu, J; Nugent, C D; Wang, H; Martinez, L

    2014-01-01

    The recently developed extended belief rule-based inference methodology (RIMER+) recognizes the need to model the different types of information and uncertainty that usually coexist in real environments. A home setting with sensors located in different rooms and on different appliances can be considered a particularly relevant example of such an environment, which brings a range of challenges for sensor-based activity recognition. Although RIMER+ has been designed as a generic decision model that could be applied in a wide range of situations, this paper discusses how this methodology can be adapted to recognize human activities using binary sensors within smart environments. The evaluation of RIMER+ against other state-of-the-art classifiers showed significant advantages in terms of accuracy, efficiency and applicability, especially in situations of input data incompleteness; it demonstrates the potential of this methodology and underpins the basis for further research on the topic.

  12. SIRE: A Simple Interactive Rule Editor for NICBES

    NASA Technical Reports Server (NTRS)

    Bykat, Alex

    1988-01-01

    To support the evolution of domain expertise, and its representation in an expert system knowledge base, a user-friendly rule base editor is mandatory. The Nickel Cadmium Battery Expert System (NICBES), a prototype of an expert system for the Hubble Space Telescope power storage management system, does not provide such an editor. In the following, a Simple Interactive Rule Base Editor (SIRE) for NICBES is described. SIRE provides a consistent internal representation of the NICBES knowledge base. It supports knowledge presentation and provides a user-friendly, code-language-independent medium for rule addition and modification. SIRE is integrated with NICBES via an interface module. This module provides translation of the internal representation to Prolog-type rules (Horn clauses), subsequent rule assertion, and a simple mechanism for rule selection for its Prolog inference engine.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Qingcheng, E-mail: qiy9@pitt.edu; To, Albert C., E-mail: albertto@pitt.edu

    Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015)) is applied to capture surface effects in nanosized structures by designing a surface summation rule SR^S within the framework of MMM. Combined with the previously proposed bulk summation rule SR^B, the MMM summation rule SR^MMM is completed. SR^S and SR^B are consistently formed within SR^MMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key idea behind the good performance of SR^MMM is that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface area is different from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonds. As such, SR^S and SR^B are employed for the surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SR^MMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SR^MMM with respect to high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced by SR^MMM, which is analogous to the numerical integration error of a quadrature rule in FEM, is very small.

  14. Design of a new low-phase-noise millimetre-wave quadrature voltage-controlled oscillator

    NASA Astrophysics Data System (ADS)

    Kashani, Zeinab; Nabavi, Abdolreza

    2018-07-01

    This paper presents a new circuit topology of millimetre-wave quadrature voltage-controlled oscillator (QVCO) using an improved Colpitts oscillator without tail bias. By employing an extra capacitance between the drain and source terminations of the transistors and optimising circuit values, a low-power and low-phase-noise (PN) oscillator is designed. For generating the output signals with 90° phase difference, a self-injection coupling network between two identical cores is used. The proposed QVCO dissipates no extra dc power for coupling, since there is no dc path to ground for the coupled transistors and no extra noise is added to the circuit. In a standard 180-nm CMOS technology, the best figure-of-merit is -188.5 and the power consumption is 14.98-15.45 mW, for a 58.2 GHz centre frequency and an oscillation range from 59.3 to 59.6 GHz. The PN is -104.86 dBc/Hz at 1-MHz offset.

  15. Digital services using quadrature amplitude modulation (QAM) over CATV analog DWDM system

    NASA Astrophysics Data System (ADS)

    Yeh, JengRong; Selker, Mark D.; Trail, J.; Piehler, David; Levi, Israel

    2000-04-01

    Dense Wavelength Division Multiplexing (DWDM) has recently gained great popularity as it provides a cost-effective way to increase the transmission capacity of the existing fiber cable plant. For a long time, dense WDM was used exclusively for baseband digital applications, predominantly in terrestrial long-haul networks and in some cases in metropolitan and enterprise networks. Recently, the performance of DWDM components and frequency-stabilized lasers has substantially improved while the costs have come down significantly. This makes a variety of new optical network architectures economically viable. The first commercial 8-wavelength DWDM system designed for Hybrid Fiber Coax networks was reported in 1998. This type of DWDM system utilizes Sub-Carrier Multiplexing (SCM) of Quadrature Amplitude Modulated (QAM) signals to transport IP data, digital video broadcast, and Video on Demand on ITU-grid lightwave carriers. The ability of DWDM to provide scalable transmission capacity in the optical layer with SCM granularity is now considered by many to be the most promising technology for future transport and distribution of broadband multimedia services.
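
    For a feel of the QAM signalling such SCM/DWDM links carry, here is a minimal Gray-coded 16-QAM symbol mapper; the bit-to-level mapping is one common convention, not necessarily the one used in the cited system.

    ```python
    import numpy as np

    # Gray-coded 16-QAM: two bits select the I level, two bits select the Q level.
    LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

    def qam16_modulate(bits):
        """Map a bit sequence (length divisible by 4) to complex 16-QAM symbols."""
        bits = np.asarray(bits).reshape(-1, 4)
        return np.array([LEVELS[(b[0], b[1])] + 1j * LEVELS[(b[2], b[3])] for b in bits])

    symbols = qam16_modulate([1, 0, 0, 1, 0, 0, 1, 1])
    print(symbols)   # [ 3.-1.j  -3.+1.j]
    ```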

  16. Taking a gamble or playing by the rules: Dissociable prefrontal systems implicated in probabilistic versus deterministic rule-based decisions

    PubMed Central

    Bhanji, Jamil P.; Beer, Jennifer S.; Bunge, Silvia A.

    2014-01-01

    A decision may be difficult because complex information processing is required to evaluate choices according to deterministic decision rules and/or because it is not certain which choice will lead to the best outcome in a probabilistic context. Factors that tax decision making such as decision rule complexity and low decision certainty should be disambiguated for a more complete understanding of the decision making process. Previous studies have examined the brain regions that are modulated by decision rule complexity or by decision certainty but have not examined these factors together in the context of a single task or study. In the present functional magnetic resonance imaging study, both decision rule complexity and decision certainty were varied in comparable decision tasks. Further, the level of certainty about which choice to make (choice certainty) was varied separately from certainty about the final outcome resulting from a choice (outcome certainty). Lateral prefrontal cortex, dorsal anterior cingulate cortex, and bilateral anterior insula were modulated by decision rule complexity. Anterior insula was engaged more strongly by low than high choice certainty decisions, whereas ventromedial prefrontal cortex showed the opposite pattern. These regions showed no effect of the independent manipulation of outcome certainty. The results disambiguate the influence of decision rule complexity, choice certainty, and outcome certainty on activity in diverse brain regions that have been implicated in decision making. Lateral prefrontal cortex plays a key role in implementing deterministic decision rules, ventromedial prefrontal cortex in probabilistic rules, and anterior insula in both. PMID:19781652

  17. Fuzzy rule-based forecast of meteorological drought in western Niger

    NASA Astrophysics Data System (ADS)

    Abdourahamane, Zakari Seybou; Acar, Reşat

    2018-01-01

    Understanding the causes of rainfall anomalies in the West African Sahel to effectively predict drought events remains a challenge. The physical mechanisms that influence precipitation in this region are complex, uncertain, and imprecise in nature. Fuzzy logic techniques are renowned for being highly efficient at modeling such dynamics. This paper attempts to forecast meteorological drought in western Niger using fuzzy rule-based modeling techniques. The 3-month scale standardized precipitation index (SPI-3) of four rainfall stations was used as the predictand. Monthly data of the southern oscillation index (SOI), South Atlantic sea surface temperature (SST), relative humidity (RH), and Atlantic sea level pressure (SLP), sourced from the National Oceanic and Atmospheric Administration (NOAA), were used as predictors. Fuzzy rules and membership functions were generated using a fuzzy c-means clustering approach, expert decision, and literature review. For a minimum lead time of 1 month, the model has a coefficient of determination R² between 0.80 and 0.88, a mean square error (MSE) below 0.17, and a Nash-Sutcliffe efficiency (NSE) ranging between 0.79 and 0.87. The empirical frequency distributions of the predicted and observed drought classes are equal at the 99% confidence level based on a two-sample t test. Results also revealed a discrepancy in the influence of SOI and SLP on drought occurrence at the four stations, while the effects of SST and RH are space independent, both being significantly correlated (at the α < 0.05 level) with the SPI-3. Moreover, the implemented fuzzy model shows better forecast skill than a decision-tree-based forecast model.
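
    The skill measures quoted above (MSE, NSE, R²) can be computed as in the sketch below; the SPI-3 values used are invented purely to exercise the functions.

    ```python
    import numpy as np

    def forecast_skill(observed, predicted):
        """Return MSE, Nash-Sutcliffe efficiency and R^2 for a drought-index forecast."""
        o, p = np.asarray(observed, float), np.asarray(predicted, float)
        mse = np.mean((o - p) ** 2)
        nse = 1.0 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2)
        r2 = np.corrcoef(o, p)[0, 1] ** 2
        return mse, nse, r2

    # Made-up SPI-3 series, just to exercise the metrics.
    obs  = [-1.2, -0.4, 0.3, 1.1, -0.8, 0.0]
    pred = [-1.0, -0.5, 0.4, 0.9, -0.7, 0.2]
    print(forecast_skill(obs, pred))
    ```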

  18. Hyper-heuristic Evolution of Dispatching Rules: A Comparison of Rule Representations.

    PubMed

    Branke, Jürgen; Hildebrandt, Torsten; Scholz-Reiter, Bernd

    2015-01-01

    Dispatching rules are frequently used for real-time, online scheduling in complex manufacturing systems. Design of such rules is usually done by experts in a time consuming trial-and-error process. Recently, evolutionary algorithms have been proposed to automate the design process. There are several possibilities to represent rules for this hyper-heuristic search. Because the representation determines the search neighborhood and the complexity of the rules that can be evolved, a suitable choice of representation is key for a successful evolutionary algorithm. In this paper we empirically compare three different representations, both numeric and symbolic, for automated rule design: A linear combination of attributes, a representation based on artificial neural networks, and a tree representation. Using appropriate evolutionary algorithms (CMA-ES for the neural network and linear representations, genetic programming for the tree representation), we empirically investigate the suitability of each representation in a dynamic stochastic job shop scenario. We also examine the robustness of the evolved dispatching rules against variations in the underlying job shop scenario, and visualize what the rules do, in order to get an intuitive understanding of their inner workings. Results indicate that the tree representation using an improved version of genetic programming gives the best results if many candidate rules can be evaluated, closely followed by the neural network representation that already leads to good results for small to moderate computational budgets. The linear representation is found to be competitive only for extremely small computational budgets.
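
    As a sketch of the simplest representation compared in the paper, a linear combination of job attributes can serve directly as a dispatching priority; the attribute names and weights below are illustrative stand-ins for what the evolutionary algorithm would tune.

    ```python
    import numpy as np

    def linear_dispatching_priority(job, weights):
        """Priority as a weighted sum of job attributes; the job with the lowest
        value is processed first. Attribute names and weights are illustrative."""
        attrs = np.array([job["processing_time"], job["slack"], job["work_remaining"]])
        return float(np.dot(weights, attrs))

    jobs = [{"id": 1, "processing_time": 5.0, "slack": 12.0, "work_remaining": 20.0},
            {"id": 2, "processing_time": 3.0, "slack": 30.0, "work_remaining": 8.0}]
    weights = np.array([1.0, 0.2, 0.1])   # in the paper these would be evolved, e.g. by CMA-ES

    best = min(jobs, key=lambda j: linear_dispatching_priority(j, weights))
    print(best["id"])   # 1
    ```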

  19. Combining tabular, rule-based, and procedural knowledge in computer-based guidelines for childhood immunization.

    PubMed

    Miller, P L; Frawley, S J; Sayward, F G; Yasnoff, W A; Duncan, L; Fleming, D W

    1997-06-01

    IMM/Serve is a computer program which implements the clinical guidelines for childhood immunization. IMM/Serve accepts as input a child's immunization history. It then indicates which vaccinations are due and which vaccinations should be scheduled next. The clinical guidelines for immunization are quite complex and are modified quite frequently. As a result, it is important that IMM/Serve's knowledge be represented in a format that facilitates the maintenance of that knowledge as the field evolves over time. To achieve this goal, IMM/Serve uses four representations for different parts of its knowledge base: (1) Immunization forecasting parameters that specify the minimum ages and wait-intervals for each dose are stored in tabular form. (2) The clinical logic that determines which set of forecasting parameters applies for a particular patient in each vaccine series is represented using if-then rules. (3) The temporal logic that combines dates, ages, and intervals to calculate recommended dates, is expressed procedurally. (4) The screening logic that checks each previous dose for validity is performed using a decision table that combines minimum ages and wait intervals with a small amount of clinical logic. A knowledge maintenance tool, IMM/Def, has been developed to help maintain the rule-based logic. The paper describes the design of IMM/Serve and the rationale and role of the different forms of knowledge used.
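
    A minimal sketch of how tabular forecasting parameters and procedural temporal logic can combine to yield a due date; the parameter values below are hypothetical and are not the real immunization schedule.

    ```python
    from datetime import date, timedelta

    # Hypothetical forecasting parameters (NOT the real schedule): for each dose number,
    # a minimum age in days and a minimum wait after the previous dose.
    FORECAST = {1: {"min_age_days": 42, "min_wait_days": 0},
                2: {"min_age_days": 70, "min_wait_days": 28}}

    def next_dose_due(dob, previous_dose_dates):
        """The next dose is due at the later of (birth date + minimum age)
        and (previous dose date + minimum wait interval)."""
        dose_no = len(previous_dose_dates) + 1
        p = FORECAST[dose_no]
        due = dob + timedelta(days=p["min_age_days"])
        if previous_dose_dates:
            due = max(due, previous_dose_dates[-1] + timedelta(days=p["min_wait_days"]))
        return due

    print(next_dose_due(date(2024, 1, 1), [date(2024, 2, 20)]))   # 2024-03-19
    ```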

  20. Orthogonal search-based rule extraction for modelling the decision to transfuse.

    PubMed

    Etchells, T A; Harrison, M J

    2006-04-01

    Data from an audit relating to transfusion decisions during intermediate or major surgery were analysed to determine the strengths of certain factors in the decision making process. The analysis, using orthogonal search-based rule extraction (OSRE) from a trained neural network, demonstrated that the risk of tissue hypoxia (ROTH) assessed using a 100-mm visual analogue scale, the haemoglobin value (Hb) and the presence or absence of on-going haemorrhage (OGH) were able to reproduce the transfusion decisions with a joint specificity of 0.96 and sensitivity of 0.93 and a positive predictive value of 0.9. The rules indicating transfusion were: 1. ROTH > 32 mm and Hb < 94 g·l⁻¹; 2. ROTH > 13 mm and Hb < 87 g·l⁻¹; 3. ROTH > 38 mm, Hb < 102 g·l⁻¹ and OGH; 4. Hb < 78 g·l⁻¹.
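
    The four extracted rules can be transcribed directly; the sketch below encodes them as quoted above, for illustration only.

    ```python
    def transfusion_indicated(roth_mm, hb_g_per_l, ongoing_haemorrhage):
        """Direct transcription of the four extracted rules: ROTH in mm (0-100 VAS), Hb in g/l."""
        return ((roth_mm > 32 and hb_g_per_l < 94) or
                (roth_mm > 13 and hb_g_per_l < 87) or
                (roth_mm > 38 and hb_g_per_l < 102 and ongoing_haemorrhage) or
                (hb_g_per_l < 78))

    print(transfusion_indicated(40, 95, True))    # True  (rule 3 fires)
    print(transfusion_indicated(10, 90, False))   # False (no rule fires)
    ```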

  1. Testing the performance of technical trading rules in the Chinese markets based on superior predictive test

    NASA Astrophysics Data System (ADS)

    Wang, Shan; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing

    2015-12-01

    Technical trading rules have a long history of being used by practitioners in financial markets. Their profitability and efficiency are, however, still controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and the China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy can be found, using performance measurements based on the return and the Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate whether there exists a trading rule that can significantly outperform the benchmark. The results show that for the SSCI, technical trading rules offer significant profitability, while for the CSI 300 this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in the sub-series, which has exactly the same spanning period as that of the CSI 300, is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007 the effectiveness of technical trading rules was greatly improved. This is consistent with the predictive ability of technical trading rules appearing when the market is less efficient.
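
    As a toy example of evaluating a single technical trading rule by return and Sharpe ratio (the paper tests thousands of rules and corrects for data snooping, which this sketch does not), consider a moving-average crossover applied to a synthetic price path.

    ```python
    import numpy as np

    def sma(x, n):
        """Trailing simple moving average; the first n-1 entries are NaN."""
        out = np.full(len(x), np.nan)
        c = np.cumsum(np.insert(x, 0, 0.0))
        out[n - 1:] = (c[n:] - c[:-n]) / n
        return out

    def ma_crossover_performance(prices, short=5, long=20):
        """Hold the index on days where the short MA closed above the long MA the day before."""
        p = np.asarray(prices, float)
        signal = (sma(p, short) > sma(p, long))[:-1]   # yesterday's signal
        daily = np.diff(p) / p[:-1]                    # today's simple return
        strat = np.where(signal, daily, 0.0)
        sharpe = np.sqrt(252) * strat.mean() / strat.std(ddof=1)   # annualised Sharpe ratio
        return strat.sum(), sharpe

    rng = np.random.default_rng(0)
    prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 500)))   # synthetic price path
    print(ma_crossover_performance(prices))
    ```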

  2. Rule-based exposure assessment versus case-by-case expert assessment using the same information in a community-based study.

    PubMed

    Peters, Susan; Glass, Deborah C; Milne, Elizabeth; Fritschi, Lin

    2014-03-01

    Retrospective exposure assessment in community-based studies is largely reliant on questionnaire information. Expert assessment is often used to assess lifetime occupational exposures, but these assessments generally lack transparency and are very time-consuming. We explored the agreement between a rule-based assessment approach and case-by-case expert assessment of occupational exposures in a community-based study. We used data from a case-control study of childhood acute lymphoblastic leukaemia in which parental occupational exposures were originally assigned by expert assessment. Key questions were identified from the completed parent questionnaires and, on the basis of these, rules were written to assign exposure levels to diesel exhaust, pesticides and solvents. We estimated exposure prevalence separately for fathers and mothers, and used κ statistics to assess the agreement between the two exposure assessment methods. Exposures were assigned to 5829 jobs among 1079 men and 6189 jobs among 1234 women. For both sexes, agreement was good for the two assessment methods of exposure to diesel exhaust at a job level (κ=0.70 for men and κ=0.71 for women) and at a person level (κ=0.74 and κ=0.75). The agreement was good to excellent for pesticide exposure among men (κ=0.74 for jobs and κ=0.84 at a person level) and women (κ=0.68 and κ=0.71 at a job and person level, respectively). Moderate to good agreement was observed for assessment of solvent exposure, which was better for women than men. The rule-based assessment approach appeared to be an efficient alternative for assigning occupational exposures in a community-based study for a selection of occupational exposures.
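
    Agreement between the rule-based and expert assessments is summarised with Cohen's κ; a minimal implementation on invented labels is sketched below.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Agreement beyond chance between two assessments (e.g. expert vs rule-based)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        po = sum(a == b for a, b in zip(rater_a, rater_b)) / n        # observed agreement
        ca, cb = Counter(rater_a), Counter(rater_b)
        pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2   # chance agreement
        return (po - pe) / (1 - pe)

    # Invented exposure labels for six jobs, just to exercise the function.
    expert = ["exposed", "unexposed", "exposed", "unexposed", "exposed", "unexposed"]
    rules  = ["exposed", "unexposed", "unexposed", "unexposed", "exposed", "unexposed"]
    print(round(cohens_kappa(expert, rules), 2))   # 0.67
    ```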

  3. Updating OSHA standards based on national consensus standards. Direct final rule.

    PubMed

    2007-12-14

    In this direct final rule, the Agency is removing several references to consensus standards that have requirements that duplicate, or are comparable to, other OSHA rules; this action includes correcting a paragraph citation in one of these OSHA rules. The Agency also is removing a reference to American Welding Society standard A3.0-1969 ("Terms and Definitions") in its general-industry welding standards. This rulemaking is a continuation of OSHA's ongoing effort to update references to consensus and industry standards used throughout its rules.

  4. Diameter measurement of optical nanofiber based on high-order Bragg reflections using a ruled grating.

    PubMed

    Zhu, Ming; Wang, Yao-Ting; Sun, Yi-Zhi; Zhang, Lijian; Ding, Wei

    2018-02-01

    A convenient method using a commercially available ruled grating for precise and overall diameter measurement of optical nanofibers (ONFs) is presented. We form a composite Bragg reflector with a micron-scale period by dissolving the aluminum coating, slicing the grating along the ruling lines, and mounting it on an ONF. The resonant wavelengths of high-order Bragg reflections depend on the fiber diameter, enabling nondestructive measurement of the ONF diameter profile. This method provides an easy and economic diagnostic tool for a wide variety of ONF-based applications.

  5. Automatic rule generation for high-level vision

    NASA Technical Reports Server (NTRS)

    Rhee, Frank Chung-Hoon; Krishnapuram, Raghu

    1992-01-01

    A new fuzzy set based technique that was developed for decision making is discussed. It is a method to generate fuzzy decision rules automatically for image analysis. This paper proposes a method to generate rule-based approaches to solve problems such as autonomous navigation and image understanding automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.

  6. Stabilizing operation point technique based on the tunable distributed feedback laser for interferometric sensors

    NASA Astrophysics Data System (ADS)

    Mao, Xuefeng; Zhou, Xinlei; Yu, Qingxu

    2016-02-01

    We describe a stabilizing operation point technique based on a tunable Distributed Feedback (DFB) laser for quadrature demodulation of interferometric sensors. By introducing automatic quadrature-point locking and periodic wavelength-tuning compensation into the interferometric system, the operation point is stabilized when the system suffers various environmental perturbations. To demonstrate the feasibility of this stabilizing operation point technique, experiments have been performed using a tunable DFB laser as the light source to interrogate an extrinsic Fabry-Perot interferometric vibration sensor and a diaphragm-based acoustic sensor. Experimental results show that good tracking of the Q-point was effectively realized.

  7. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks.

    PubMed

    Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non

  8. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks

    PubMed Central

    Wu, Chenxue; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non

  9. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning

    PubMed Central

    Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.

    2015-01-01

    We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905

  10. Mining algorithm for association rules in big data based on Hadoop

    NASA Astrophysics Data System (ADS)

    Fu, Chunhua; Wang, Xiaojing; Zhang, Lijun; Qiao, Liying

    2018-04-01

    To address the problem that traditional association rule mining algorithms cannot meet the efficiency and scalability requirements of mining large amounts of data, the FP-Growth algorithm is taken as an example and parallelized based on the Hadoop framework and the MapReduce model. On this basis, it is further improved using a transaction-reduction method to enhance the algorithm's mining efficiency. The experiments, which consist of verification of the parallel mining results, a comparison of efficiency between the serial and parallel versions, and the relationships between mining time and node number and between mining time and data amount, are carried out on a Hadoop cluster. The experiments show that the parallelized FP-Growth algorithm is able to accurately mine frequent item sets, with better performance and scalability. It can better meet the requirements of big data mining and efficiently mine frequent item sets and association rules from large datasets.
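
    A minimal sketch of the transaction-reduction idea mentioned above: count item supports, then prune infrequent items (and emptied transactions) before the frequent-itemset mining pass. The transactions and support threshold are made up, and no Hadoop/MapReduce machinery is involved here.

    ```python
    from collections import Counter
    from itertools import chain

    transactions = [{"milk", "bread", "butter"},
                    {"beer", "bread"},
                    {"milk", "bread", "beer"},
                    {"milk", "butter"},
                    {"bread", "butter"}]
    min_support = 3

    # Count single-item supports across the transactions (the "map/count" step).
    item_counts = Counter(chain.from_iterable(transactions))

    # Transaction reduction: drop items that can never appear in a frequent itemset,
    # then drop transactions that no longer contribute anything.
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}
    reduced = [t & frequent_items for t in transactions]
    reduced = [t for t in reduced if t]

    print(sorted(frequent_items))   # ['bread', 'butter', 'milk']
    print(reduced)
    ```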

  11. Generating Concise Rules for Human Motion Retrieval

    NASA Astrophysics Data System (ADS)

    Mukai, Tomohiko; Wakisaka, Ken-Ichi; Kuriyama, Shigeru

    This paper proposes a method for retrieving human motion data with concise retrieval rules based on the spatio-temporal features of motion appearance. Our method first converts motion clip into a form of clausal language that represents geometrical relations between body parts and their temporal relationship. A retrieval rule is then learned from the set of manually classified examples using inductive logic programming (ILP). ILP automatically discovers the essential rule in the same clausal form with a user-defined hypothesis-testing procedure. All motions are indexed using this clausal language, and the desired clips are retrieved by subsequence matching using the rule. Such rule-based retrieval offers reasonable performance and the rule can be intuitively edited in the same language form. Consequently, our method enables efficient and flexible search from a large dataset with simple query language.

  12. Object-Driven and Temporal Action Rules Mining

    ERIC Educational Resources Information Center

    Hajja, Ayman

    2013-01-01

    In this thesis, I present my complete research work in the field of action rules, more precisely object-driven and temporal action rules. The drive behind the introduction of object-driven and temporally based action rules is to bring forth an adapted approach to extract action rules from a subclass of systems that have a specific nature, in which…

  13. 76 FR 76815 - Business Opportunity Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-08

    ...The Commission is adopting final amendments to its Trade Regulation Rule entitled ``Disclosure Requirements and Prohibitions Concerning Business Opportunities'' (``Business Opportunity Rule'' or ``Rule''). Among other things, the Business Opportunity Rule has been amended to broaden its scope to cover business opportunity sellers not covered by the interim Business Opportunity Rule, such as sellers of work-at-home opportunities, and to streamline and simplify the disclosures that sellers must provide to prospective purchasers. The final Rule is based upon the comments received in response to an Advance Notice of Proposed Rulemaking (``ANPR''), an Initial Notice of Proposed Rulemaking (``INPR''), a Revised Notice of Proposed Rulemaking (``RNPR''), a public workshop, a Staff Report, and other information discussed herein. This document also contains the text of the final Rule and the Rule's Statement of Basis and Purpose (``SBP''), including a Regulatory Analysis.

  14. A comprehensive revisit of the ρ meson with improved Monte-Carlo based QCD sum rules

    NASA Astrophysics Data System (ADS)

    Wang, Qi-Nan; Zhang, Zhu-Feng; Steele, T. G.; Jin, Hong-Ying; Huang, Zhuo-Ran

    2017-07-01

    We improve the Monte-Carlo based QCD sum rules by introducing a rigorous Hölder-inequality-determined sum rule window and a Breit-Wigner type parametrization for the phenomenological spectral function. In this improved sum rule analysis methodology, the sum rule analysis window can be determined without any assumptions on OPE convergence or the QCD continuum. Therefore, an unbiased prediction can be obtained for the phenomenological parameters (the hadronic mass and width, etc.). We test the new approach in the ρ meson channel with re-examination and inclusion of α_s corrections to dimension-4 condensates in the OPE. We obtain results highly consistent with experimental values. We also discuss the possible extension of this method to some other channels. Supported by NSFC (11175153, 11205093, 11347020), the Open Foundation of the Most Important Subjects of Zhejiang Province, and the K. C. Wong Magna Fund at Ningbo University. TGS is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). Z. F. Zhang and Z. R. Huang are grateful to the University of Saskatchewan for its warm hospitality.
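
    For reference, a generic relativistic Breit-Wigner shape of the kind used to parametrise a phenomenological spectral function is sketched below; the exact parametrisation and normalisation in the paper may differ, and the mass and width values are simply the familiar rho-meson figures.

    ```python
    import numpy as np

    def breit_wigner(s, mass, width):
        """Relativistic Breit-Wigner shape (arbitrary normalisation) as a function of s."""
        return (mass * width) / ((s - mass**2) ** 2 + (mass * width) ** 2)

    s = np.linspace(0.3, 1.2, 5)                        # GeV^2
    print(breit_wigner(s, mass=0.775, width=0.149))     # peaks near s = m_rho^2 ~ 0.60 GeV^2
    ```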

  15. Deficits in Category Learning in Older Adults: Rule-Based Versus Clustering Accounts

    PubMed Central

    2017-01-01

    Memory research has long been one of the key areas of investigation for cognitive aging researchers but only in the last decade or so has categorization been used to understand age differences in cognition. Categorization tasks focus more heavily on the grouping and organization of items in memory, and often on the process of learning relationships through trial and error. Categorization studies allow researchers to more accurately characterize age differences in cognition: whether older adults show declines in the way in which they represent categories with simple rules or declines in representing categories by similarity to past examples. In the current study, young and older adults participated in a set of classic category learning problems, which allowed us to distinguish between three hypotheses: (a) rule-complexity: categories were represented exclusively with rules and older adults had differential difficulty when more complex rules were required, (b) rule-specific: categories could be represented either by rules or by similarity, and there were age deficits in using rules, and (c) clustering: similarity was mainly used and older adults constructed a less-detailed representation by lumping more items into fewer clusters. The ordinal levels of performance across different conditions argued against rule-complexity, as older adults showed greater deficits on less complex categories. The data also provided evidence against rule-specificity, as single-dimensional rules could not explain age declines. Instead, computational modeling of the data indicated that older adults utilized fewer conceptual clusters of items in memory than did young adults. PMID:28816474

  16. Rule-based optimization and multicriteria decision support for packaging a truck chassis

    NASA Astrophysics Data System (ADS)

    Berger, Martin; Lindroth, Peter; Welke, Richard

    2017-06-01

    Trucks are highly individualized products where exchangeable parts are flexibly combined to suit different customer requirements, which leads to great complexity in product development. Therefore, an optimization approach based on constraint programming is proposed for automatically packaging parts of a truck chassis by following packaging rules expressed as constraints. A multicriteria decision support system is developed in which a database of truck layouts is computed, among which interactive navigation can then be performed. The work has been performed in cooperation with Volvo Group Trucks Technology (GTT), from which specific rules have been used. Several scenarios are described where the methods developed can be successfully applied and lead to less time-consuming manual work, fewer mistakes, and greater flexibility in configuring trucks. A numerical evaluation is also presented showing the efficiency and practical relevance of the methods, which are implemented in a software tool.

  17. Long-Term Homeostatic Properties Complementary to Hebbian Rules in CuPc-Based Multifunctional Memristor

    NASA Astrophysics Data System (ADS)

    Wang, Laiyuan; Wang, Zhiyong; Lin, Jinyi; Yang, Jie; Xie, Linghai; Yi, Mingdong; Li, Wen; Ling, Haifeng; Ou, Changjin; Huang, Wei

    2016-10-01

    Most simulations of neuroplasticity in memristors, which are potentially used to develop artificial synapses, are confined to the basic biological Hebbian rules. However, these simplex rules can induce excessive excitation/inhibition, even collapse of neural activities, because they neglect the properties of long-term homeostasis involved in the frameworks of realistic neural networks. Here, we develop organic CuPc-based memristors whose excitatory and inhibitory conductivities can implement both Hebbian rules and homeostatic plasticity, complementary to Hebbian patterns and conducive to long-term homeostasis. In another adaptive situation for homeostasis, in thicker samples, the overall excitement under periodic moderate stimuli tends to decrease and is recovered under intense inputs. Interestingly, the prototypes can be equipped with bio-inspired habituation and sensitization functions outperforming the conventional simplified algorithms. These mechanisms mutually regulate each other to achieve homeostasis. Therefore, we develop a novel versatile memristor with advanced synaptic homeostasis for comprehensive neural functions.

  18. Context-based tourism information filtering with a semantic rule engine.

    PubMed

    Lamsfus, Carlos; Martin, David; Alzua-Sorzabal, Aurkene; López-de-Ipiña, Diego; Torres-Manzanera, Emilio

    2012-01-01

    This paper presents the CONCERT framework, a push/filter information consumption paradigm, based on a rule-based semantic contextual information system for tourism. CONCERT suggests a specific insight of the notion of context from a human mobility perspective. It focuses on the particular characteristics and requirements of travellers and addresses the drawbacks found in other approaches. Additionally, CONCERT suggests the use of digital broadcasting as push communication technology, whereby tourism information is disseminated to mobile devices. This information is then automatically filtered by a network of ontologies and offered to tourists on the screen. The results obtained in the experiments carried out show evidence that the information disseminated through digital broadcasting can be manipulated by the network of ontologies, providing contextualized information that produces user satisfaction.

  19. Construction of a clinical decision support system for undergoing surgery based on domain ontology and rules reasoning.

    PubMed

    Bau, Cho-Tsan; Chen, Rung-Ching; Huang, Chung-Yi

    2014-05-01

    To construct a clinical decision support system (CDSS) for undergoing surgery based on domain ontology and rules reasoning in the setting of hospitalized diabetic patients. The ontology was created with a modified ontology development method, including specification and conceptualization, formalization, implementation, and evaluation and maintenance. The Protégé-Web Ontology Language editor was used to implement the ontology. Embedded clinical knowledge was elicited to complement the domain ontology with formal concept analysis. The decision rules were translated into JENA format, which JENA can use to infer recommendations based on patient clinical situations. The ontology includes 31 classes and 13 properties, plus 38 JENA rules that were built to generate recommendations. The evaluation studies confirmed the correctness of the ontology, acceptance of recommendations, satisfaction with the system, and usefulness of the ontology for glycemic management of diabetic patients undergoing surgery, especially for domain experts. The contribution of this research is to set up an evidence-based hybrid ontology and an evaluation method for CDSS. The system can help clinicians to achieve inpatient glycemic control in diabetic patients undergoing surgery while avoiding hypoglycemia.

  20. Targeted training of the decision rule benefits rule-guided behavior in Parkinson's disease.

    PubMed

    Ell, Shawn W

    2013-12-01

    The impact of Parkinson's disease (PD) on rule-guided behavior has received considerable attention in cognitive neuroscience. The majority of research has used PD as a model of dysfunction in frontostriatal networks, but very few attempts have been made to investigate the possibility of adapting common experimental techniques in an effort to identify the conditions that are most likely to facilitate successful performance. The present study investigated a targeted training paradigm designed to facilitate rule learning and application using rule-based categorization as a model task. Participants received targeted training in which there was no selective-attention demand (i.e., stimuli varied along a single, relevant dimension) or nontargeted training in which there was selective-attention demand (i.e., stimuli varied along a relevant dimension as well as an irrelevant dimension). Following training, all participants were tested on a rule-based task with selective-attention demand. During the test phase, PD patients who received targeted training performed similarly to control participants and outperformed patients who did not receive targeted training. As a preliminary test of the generalizability of the benefit of targeted training, a subset of the PD patients were tested on the Wisconsin card sorting task (WCST). PD patients who received targeted training outperformed PD patients who did not receive targeted training on several WCST performance measures. These data further characterize the contribution of frontostriatal circuitry to rule-guided behavior. Importantly, these data also suggest that PD patient impairment, on selective-attention-demanding tasks of rule-guided behavior, is not inevitable and highlight the potential benefit of targeted training.

  1. Comprehensive gravitational modeling of the vertical cylindrical prism by Gauss-Legendre quadrature integration

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, M. F.; Hashemi, H.; von Frese, R. R. B.

    2018-01-01

    Forward modeling is the basis of gravitational anomaly inversion that is widely applied to map subsurface mass variations. This study uses numerical least-squares Gauss-Legendre quadrature (GLQ) integration to evaluate the gravitational potential, anomaly and gradient components of the vertical cylindrical prism element. These results, in turn, may be integrated to accurately model the complete gravitational effects of fluid bearing rock formations and other vertical cylinder-like geological bodies with arbitrary variations in shape and density. Comparing the GLQ gravitational effects of uniform density, vertical circular cylinders against the effects calculated by a number of other methods illustrates the veracity of the GLQ modeling method and the accuracy limitations of the other methods. Geological examples include modeling the gravitational effects of a formation washout to help map azimuthal variations of the formation's bulk densities around the borehole wall. As another application, the gravitational effects of a seismically and gravimetrically imaged salt dome within the Laurentian Basin are evaluated for the velocity, density and geometric properties of the Basin's sedimentary formations.
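
    A minimal sketch of Gauss-Legendre quadrature applied to the axial gravity of a uniform vertical cylinder, a case where a closed-form answer exists for comparison; the geometry and density values are arbitrary, and this is far simpler than the general prism element developed in the paper.

    ```python
    import numpy as np

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

    def cylinder_gz_glq(radius, height, rho, z_top, n=16):
        """Vertical attraction on the axis, a distance z_top above the top of a uniform
        vertical cylinder, by Gauss-Legendre quadrature over stacked discs."""
        x, w = np.polynomial.legendre.leggauss(n)
        d = 0.5 * height * x + z_top + 0.5 * height      # depth of each disc below the point
        integrand = 2 * np.pi * G * rho * (1 - d / np.sqrt(d**2 + radius**2))
        return 0.5 * height * np.dot(w, integrand)

    def cylinder_gz_exact(radius, height, rho, z_top):
        """Closed-form axial attraction of the same cylinder, for comparison."""
        return 2 * np.pi * G * rho * (height + np.sqrt(z_top**2 + radius**2)
                                      - np.sqrt((z_top + height)**2 + radius**2))

    print(cylinder_gz_glq(500.0, 200.0, 2670.0, 100.0))
    print(cylinder_gz_exact(500.0, 200.0, 2670.0, 100.0))   # the two values agree closely
    ```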

  2. Analytic and rule-based decision support tool for VDT workstation adjustment and computer accessories arrangement.

    PubMed

    Rurkhamet, Busagarin; Nanthavanij, Suebsak

    2004-12-01

    One important factor that leads to the development of musculoskeletal disorders (MSD) and cumulative trauma disorders (CTD) among visual display terminal (VDT) users is their work posture. While operating a VDT, a user's body posture is strongly influenced by the task, VDT workstation settings, and layout of computer accessories. This paper presents an analytic and rule-based decision support tool called EQ-DeX (an ergonomics and quantitative design expert system) that is developed to provide valid and practical recommendations regarding the adjustment of a VDT workstation and the arrangement of computer accessories. The paper explains the structure and components of EQ-DeX, input data, rules, and adjustment and arrangement algorithms. From input information such as gender, age, body height, task, etc., EQ-DeX uses analytic and rule-based algorithms to estimate quantitative settings of a computer table and a chair, as well as locations of computer accessories such as monitor, document holder, keyboard, and mouse. With the input and output screens that are designed using the concept of usability, the interactions between the user and EQ-DeX are convenient. Examples are also presented to demonstrate the recommendations generated by EQ-DeX.

  3. Algorithm Optimally Orders Forward-Chaining Inference Rules

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are so often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be developed incrementally and then ordered automatically for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot be applied directly to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base and optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
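
    A minimal sketch of the producer/consumer idea: if one rule produces a fact that another consumes, the consumer should run after the producer, which reduces to a topological sort of the rule dependency graph. The rules and fact names below are invented.

    ```python
    from graphlib import TopologicalSorter   # standard library, Python 3.9+

    # Each rule consumes some facts (antecedents) and produces others (consequents).
    rules = {
        "r1": {"consumes": {"raw_telemetry"}, "produces": {"decoded_frame"}},
        "r2": {"consumes": {"decoded_frame"}, "produces": {"channel_value"}},
        "r3": {"consumes": {"channel_value"}, "produces": {"alarm"}},
        "r4": {"consumes": {"raw_telemetry"}, "produces": {"frame_count"}},
    }

    # Rule B depends on rule A if A produces a fact that B consumes.
    deps = {b: {a for a, ra in rules.items() if ra["produces"] & rb["consumes"]}
            for b, rb in rules.items()}

    print(list(TopologicalSorter(deps).static_order()))   # e.g. ['r1', 'r4', 'r2', 'r3']
    ```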

  4. Automatic rule generation for high-level vision

    NASA Technical Reports Server (NTRS)

    Rhee, Frank Chung-Hoon; Krishnapuram, Raghu

    1992-01-01

    Many high-level vision systems use rule-based approaches to solving problems such as autonomous navigation and image understanding. The rules are usually elaborated by experts. However, this procedure may be rather tedious. In this paper, we propose a method to generate such rules automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.

  5. RuleML-Based Learning Object Interoperability on the Semantic Web

    ERIC Educational Resources Information Center

    Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.

    2008-01-01

    Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…

  6. Rule-based spatial modeling with diffusing, geometrically constrained molecules.

    PubMed

    Gruenert, Gerd; Ibrahim, Bashar; Lenser, Thorsten; Lohel, Maiko; Hinze, Thomas; Dittrich, Peter

    2010-06-07

    We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Our simulations show that the obtained dynamics can be fundamentally different from those of simulations that use classical reaction-diffusion approaches like Partial Differential Equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of the modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a Differential Equation formalism is potentially unstable when geometry is considered, and vice versa. We conclude that our approach provides a new general framework filling a gap in between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial simulation systems like those for DNA or virus capsid

  7. Rule-based spatial modeling with diffusing, geometrically constrained molecules

    PubMed Central

    2010-01-01

    Background We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Results Our simulations show that the obtained dynamics can be fundamentally different from those of simulations that use classical reaction-diffusion approaches like Partial Differential Equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of the modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a Differential Equation formalism is potentially unstable when geometry is considered, and vice versa. Conclusions We conclude that our approach provides a new general framework filling a gap between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial simulation systems like

  8. Rule Based Expert System for Monitoring Real Time Drug Supply in Hospital Using Radio Frequency Identification Technology

    NASA Astrophysics Data System (ADS)

    Driandanu, Galih; Surarso, Bayu; Suryono

    2018-02-01

    Radio frequency identification (RFID) has obtained increasing attention with the emergence of various applications. This study aims to examine the implementation of a rule based expert system supported by RFID technology in a monitoring information system for drug supply in a hospital. This research facilitates monitoring of the real-time drug supply by using data samples from the hospital pharmacy. The system is able to identify and count drugs and to provide warnings and reports in real time. The conclusion is that the rule based expert system and RFID technology can facilitate monitoring of the drug supply quickly and precisely.
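
    The behaviour described above, counting tagged items and raising a warning when stock falls below a limit, can be captured by a very small rule set. The sketch below is a hypothetical illustration of such a monitoring rule, not the system used in the study; the drug names, tag format and thresholds are invented for the example.

      # Hedged sketch: count RFID tag reads per drug and fire a low-stock warning rule.
      from collections import Counter

      REORDER_LEVEL = {"amoxicillin": 50, "paracetamol": 100}   # hypothetical thresholds

      def monitor(tag_reads):
          """tag_reads: list of (drug_name, tag_id) pairs reported by the RFID reader."""
          stock = Counter(drug for drug, _ in tag_reads)
          warnings = [f"WARNING: {drug} stock {stock[drug]} below reorder level {level}"
                      for drug, level in REORDER_LEVEL.items() if stock[drug] < level]
          return dict(stock), warnings

      reads = [("amoxicillin", f"TAG{i:04d}") for i in range(30)]
      print(monitor(reads))   # 30 units on hand, low-stock warnings raised for both drugs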

  9. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    PubMed

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and allows the spatial resolution of models to be adapted easily.

  10. Notification: EPA's Implementation and Enforcement of the Lead-Based Paint Renovation, Repair and Painting Rule

    EPA Pesticide Factsheets

    Project #OA&E-FY18-0162, March 28, 2018. The OIG plans to begin preliminary research to evaluate the EPA's implementation and enforcement of the Lead-Based Paint Renovation, Repair and Painting Rule (RRP).

  11. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because training sites and training samples are inconsistent, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. Firstly, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Secondly, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain appropriate parameters; using a region-merging algorithm starting from the single-pixel level, the optimal texture segmentation scale for the different types of features was confirmed. The segmented objects are then used as classification units, for which spectral information such as the mean, maximum, minimum, brightness and normalized values is calculated. Spatial features such as the area, length, tightness and shape index of the image objects, and texture features such as the mean, variance and entropy of the image objects, are used as classification features of the training samples. Based on the reference images and on-the-spot survey sampling points, typical training samples were selected uniformly and randomly for each type of ground object. The ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer are used to create the decision tree repository. Finally, with the help of high resolution reference images, the
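
    A toy illustration of the rule-set idea described above: each segmented image object carries spectral, spatial and texture attributes, and a small decision-tree-style rule base assigns a wetland class. The feature names and thresholds here are invented placeholders, not the values derived in the study.

      # Hedged sketch: classify image objects with threshold rules on per-object features.
      def classify_object(obj):
          """obj: dict of per-object features (hypothetical names and thresholds)."""
          if obj["ndvi_mean"] > 0.4 and obj["brightness"] < 90:
              return "marsh vegetation"
          if obj["ndwi_mean"] > 0.3:
              return "open water"
          if obj["texture_entropy"] > 4.5 and obj["shape_index"] > 2.0:
              return "mudflat"
          return "other"

      segments = [
          {"ndvi_mean": 0.55, "brightness": 70, "ndwi_mean": 0.1,
           "texture_entropy": 3.0, "shape_index": 1.2},
          {"ndvi_mean": 0.05, "brightness": 60, "ndwi_mean": 0.45,
           "texture_entropy": 2.0, "shape_index": 1.0},
      ]
      print([classify_object(s) for s in segments])   # ['marsh vegetation', 'open water']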

  12. ConsPred: a rule-based (re-)annotation framework for prokaryotic genomes.

    PubMed

    Weinmaier, Thomas; Platzer, Alexander; Frank, Jeroen; Hellinger, Hans-Jörg; Tischler, Patrick; Rattei, Thomas

    2016-11-01

    The rapidly growing number of available prokaryotic genome sequences requires fully automated and high-quality software solutions for their initial annotation and re-annotation. Here we present ConsPred, a prokaryotic genome annotation framework that performs intrinsic gene predictions, homology searches and predictions of non-coding genes as well as CRISPR repeats, and integrates all evidence into a consensus annotation. ConsPred achieves comprehensive, high-quality annotations based on rules and priorities, similar to decision-making in manual curation, and avoids conflicting predictions. Parameters controlling the annotation process are configurable by the user. ConsPred has been used in the institutions of the authors for more than 5 years and can easily be extended and adapted to specific needs. The ConsPred algorithm for producing a consensus from the varying scores of multiple gene prediction programs approaches manual curation in accuracy. Its rule-based approach for choosing final predictions avoids overriding previous manual curations. ConsPred is implemented in Java, Perl and Shell and is freely available under the Creative Commons license as a stand-alone in-house pipeline or as an Amazon Machine Image for cloud computing; see https://sourceforge.net/projects/conspred/. Contact: thomas.rattei@univie.ac.at. Supplementary information: Supplementary data are available at Bioinformatics online.

  13. A new intuitionistic fuzzy rule-based decision-making system for an operating system process scheduler.

    PubMed

    Butt, Muhammad Arif; Akram, Muhammad

    2016-01-01

    We present a new intuitionistic fuzzy rule-based decision-making system, based on intuitionistic fuzzy sets, for the process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm takes as input the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine and finally calculates the dynamic priority (dp) of all the processes in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
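
    A simplified sketch of the scheduling loop described above: nice value and burst time are fuzzified into membership/non-membership pairs (the hallmark of intuitionistic fuzzy sets), a dynamic priority dp is computed for each ready process, and the queue is sorted by decreasing dp. The membership functions and the way they are combined below are invented for illustration and are not the authors' inference engine.

      # Hedged sketch: intuitionistic-fuzzy-style dynamic priority for a ready queue.
      def ifs_pair(value, lo, hi, hesitation=0.1):
          """Map value in [lo, hi] to a (membership, non-membership) pair with some hesitation."""
          x = min(max((value - lo) / (hi - lo), 0.0), 1.0)
          mu = (1.0 - hesitation) * (1.0 - x)      # smaller value -> higher membership
          nu = (1.0 - hesitation) * x
          return mu, nu

      def dynamic_priority(nice, burst):
          mu_n, nu_n = ifs_pair(nice, -20, 19)     # low nice value  -> high priority
          mu_b, nu_b = ifs_pair(burst, 0, 100)     # short burst time -> high priority
          # toy aggregation: average memberships, penalise by non-memberships
          return 0.5 * (mu_n + mu_b) - 0.25 * (nu_n + nu_b)

      ready_queue = [("p1", 0, 80), ("p2", -5, 10), ("p3", 10, 40)]   # (pid, nice, burst)
      ready_queue.sort(key=lambda p: dynamic_priority(p[1], p[2]), reverse=True)
      print([pid for pid, _, _ in ready_queue])    # process with the highest dp runs first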

  14. Optimizing Cubature for Efficient Integration of Subspace Deformations

    PubMed Central

    An, Steven S.; Kim, Theodore; James, Doug L.

    2009-01-01

    We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics, and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St.Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. CR Categories: I.6.8 [Simulation and Modeling]: Types of Simulation—Animation, I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically based modeling G.1.4 [Mathematics of Computing]: Numerical Analysis—Quadrature and Numerical Differentiation PMID:19956777
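
    The core runtime operation described above reduces to a weighted sum of per-point force densities projected into the subspace: with cubature points x_i and non-negative weights w_i, the reduced force is approximated as f(q) ≈ sum_i w_i g(x_i, q). A minimal sketch of that evaluation is given below; the force-density callable and the training that would select the points and weights are placeholders, not the authors' optimization.

      # Hedged sketch: evaluate a reduced (subspace) force as a cubature sum
      # f(q) ≈ sum_i w_i * g(x_i, q), one r-vector contribution per cubature point.
      import numpy as np

      def reduced_force(q, points, weights, g):
          """q: (r,) reduced coordinates; points: iterable of cubature point ids;
          weights: (n,) cubature weights; g: callable (point, q) -> (r,) force density."""
          f = np.zeros_like(q)
          for p, w in zip(points, weights):
              f += w * g(p, q)            # O(r) work per point, O(n*r) in total
          return f

      # toy force density: a linear model per point (placeholder for an FEM evaluation)
      rng = np.random.default_rng(0)
      r, n = 5, 8
      K = [rng.standard_normal((r, r)) for _ in range(n)]
      g = lambda i, q: -K[i] @ q
      q = rng.standard_normal(r)
      print(reduced_force(q, range(n), np.full(n, 1.0 / n), g))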

  15. Design and evaluation of a GaAs MMIC X-band active RC quadrature power divider

    NASA Astrophysics Data System (ADS)

    Henkus, J. C.

    1991-03-01

    The design and evaluation of a GaAs MMIC (Microwave Monolithic Integrated Circuit) X-band active RC Quadrature Power Divider (QPD) is addressed. This QPD can be used as part of a vector modulator. The chosen QPD topology consists of two active first order RC all pass networks and was converted into an MMIC design. The design is completely symmetrical except for two key resistors. On-wafer S parameter measurements were carried out; a special probe head configuration was composed in order to avoid measurement accuracy degradation associated with the reversal of the active output of the QPD. The measured nominal RF behavior of the chips complies with the simulated behavior to a very high degree. The optical, DC, and RF yield is very large (97, 83, and 74 percent respectively). A modification to Takashi's all pass network was proposed which offers gain/frequency slope control and compensation ability.

  16. Rule groupings: A software engineering approach towards verification of expert systems

    NASA Technical Reports Server (NTRS)

    Mehrotra, Mala

    1991-01-01

    Currently, most expert system shells do not address software engineering issues for developing or maintaining expert systems. As a result, large expert systems tend to be incomprehensible, difficult to debug or modify and almost impossible to verify or validate. Partitioning rule based systems into rule groups which reflect the underlying subdomains of the problem should enhance the comprehensibility, maintainability, and reliability of expert system software. Attempts were made to semiautomatically structure a CLIPS rule base into groups of related rules that carry the same type of information. Different distance metrics that capture relevant information from the rules for grouping are discussed. Two clustering algorithms that partition the rule base into groups of related rules are given. Two independent evaluation criteria are developed to measure the effectiveness of the grouping strategies. Results of the experiment with three sample rule bases are presented.
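
    One common distance metric for this kind of rule grouping is the Jaccard distance between the sets of symbols (facts, templates, variables) mentioned in two rules; rules can then be clustered greedily or hierarchically. The sketch below illustrates that idea generically under those assumptions; it is not the actual metrics or clustering algorithms evaluated in the report.

      # Hedged sketch: group rules whose symbol sets are similar (Jaccard distance, complete linkage).
      def jaccard_distance(a, b):
          return 1.0 - len(a & b) / len(a | b)

      def group_rules(rule_symbols, threshold=0.6):
          """rule_symbols: dict rule_name -> set of symbols (facts/templates) used by the rule."""
          groups = []
          for name, symbols in rule_symbols.items():
              for group in groups:
                  # join a group only if close to every member already in it
                  if all(jaccard_distance(symbols, rule_symbols[m]) <= threshold for m in group):
                      group.append(name)
                      break
              else:
                  groups.append([name])
          return groups

      rules = {
          "check-pressure":  {"tank", "pressure", "valve"},
          "open-valve":      {"valve", "pressure", "command"},
          "log-temperature": {"sensor", "temperature", "log"},
      }
      print(group_rules(rules))   # pressure/valve rules grouped together, temperature rule alone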

  17. NAGWS Volleyball Rulebook, 1993. Official Rules & Interpretations/Officiating.

    ERIC Educational Resources Information Center

    1993

    The National Association for Girls and Women in Sport (NAGWS) Volleyball Rules are based on the United States Volleyball Rules, which in turn are adopted from the rules and interpretations of the International Volleyball Federation Rules. Following a foreword by Robertha Abney, NAGWS President, the publication is organized into six sections as…

  18. Long-Term Quadrature Light Variability in Early Type Interacting Binary Systems

    NASA Astrophysics Data System (ADS)

    Peters, Geraldine J.; Wilson, R. E.; Vaccaro, T. R.

    2014-01-01

    Four years of Kepler observations have revealed a phenomenon in the light curves of short-period Algol-type eclipsing binaries that has never been reported from ground-based photometry. These systems display unequal brightness at their quadrature phases that numerically reverses over a time scale of about 100-400 days. We call these systems L/T (leading hemisphere/trailing hemisphere) variables. Twenty-one such systems have so far been identified in the Kepler database and at least three classes of L/T behavior have been identified. The prototype is WX Draconis (A8V + K0IV, P=1.80 d), which shows L/T light variations of 2-3%. The primary is a delta Scuti star with a dominant pulsation period of 41 min. The Kepler light curves are being analyzed with the 2013 version of the Wilson-Devinney (WD) program that includes major improvements in modeling star spots (i.e. spot motions due to drift and stellar rotation and spot growth and decay). Preliminary analysis of the WX Dra data suggests that the L/T variability can be fit with either an accretion hot spot on the primary (T = 2.3 T_phot) that jumps in longitude or a magnetic cool spotted region on the secondary. If the latter model is correct the dark region must occupy at least 20% of the surface of the facing hemisphere of the secondary if it is completely black, or a larger area if not completely black. In both hot and cool spot scenarios magnetic fields must play a role in the activity. Echelle spectra were recently secured with the KPNO 4-m telescope to determine the mass ratios of the L/T systems and their spectral types. This information will allow us to assess whether the hot or cool spot model explains the L/T activity. Progress toward this goal will be presented. Support from NASA grants NNX11AC78G and NNX12AE44G and USC’s Women in Science and Engineering (WiSE) program is greatly appreciated.

  19. Transfer of Rule-Based Expertise through a Tutorial Dialogue

    DTIC Science & Technology

    1979-09-01

    ... be causing the infection (.2) [RULE633]. (The student asks, "Does the patient have a fever?") FEBRILE: MYCIN never needed to inquire about whether ... Of the remaining clauses, we classified most as restrictions, and the one or two that remained constituted the key factor(s) of the rule. ... 3) Infection is bacterial, KEY-FACTOR; 4) Petechial is one of the types of rash which the patient has, RESTRICTION; 5) Purpuric is not one of the types ...

  20. On the consistency between nearest-neighbor peridynamic discretizations and discretized classical elasticity models

    DOE PAGES

    Seleson, Pablo; Du, Qiang; Parks, Michael L.

    2016-08-16

    ...nearest-neighbor discretizations should be avoided in peridynamic simulations involving cracks, such discretizations are viable, for example for verification or validation purposes, in problems characterized by smooth deformations. Furthermore, we demonstrate that better quadrature rules in peridynamics can be obtained based on the functional form of solutions.

  1. Biometric image enhancement using decision rule based image fusion techniques

    NASA Astrophysics Data System (ADS)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers have confirmed that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is a challenge. The proposed work addresses how image quality can be improved by introducing an image fusion technique at the sensor level. The images produced by the decision rule based image fusion technique are evaluated and analyzed using their entropy levels and root mean square error.
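
    A toy version of a decision-rule based fusion step and of the evaluation metrics mentioned above (entropy of the fused image and root mean square error against a reference): for each pixel block the rule keeps the block from whichever sensor image has the higher local variance. The rule, block size and test images below are generic illustrations, not the scheme used in the paper.

      # Hedged sketch: per-block decision-rule fusion of two sensor images, plus metrics.
      import numpy as np

      def fuse(img_a, img_b, block=8):
          fused = np.empty_like(img_a)
          for i in range(0, img_a.shape[0], block):
              for j in range(0, img_a.shape[1], block):
                  a = img_a[i:i + block, j:j + block]
                  b = img_b[i:i + block, j:j + block]
                  # decision rule: keep the block with more local activity (variance)
                  fused[i:i + block, j:j + block] = a if a.var() >= b.var() else b
          return fused

      def entropy(img, bins=256):
          counts, _ = np.histogram(img, bins=bins, range=(0, 256))
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def rmse(img, ref):
          return float(np.sqrt(np.mean((img - ref) ** 2)))

      rng = np.random.default_rng(1)
      ref = rng.integers(0, 256, (64, 64)).astype(float)
      sensor_a, sensor_b = ref.copy(), ref.copy()
      sensor_a[:, 32:] = sensor_a[:, 32:].mean()   # sensor A lost detail in the right half
      sensor_b[:, :32] = sensor_b[:, :32].mean()   # sensor B lost detail in the left half
      fused = fuse(sensor_a, sensor_b)
      print(entropy(fused), rmse(fused, ref))      # fusion recovers detail from both inputs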

  2. Retinal hemorrhage detection by rule-based and machine learning approach.

    PubMed

    Di Xiao; Shuang Yu; Vignarajan, Janardhan; Dong An; Mei-Ling Tay-Kearney; Kanagasingam, Yogi

    2017-07-01

    Robust detection of hemorrhages (HMs) in color fundus images is important in an automatic diabetic retinopathy grading system. Detection of the hemorrhages that are close to or connected with retinal blood vessels has been found to be challenging. However, most methods have not addressed this issue, even though some of them mention it. In this paper, we propose a novel hemorrhage detection method that combines rule-based and machine learning approaches. We focus on improving the detection of hemorrhages that are close to or connected with retinal blood vessels, in addition to detecting independent hemorrhage regions. A preliminary test for detecting HM presence was conducted on images from two databases. We achieved sensitivity and specificity of 93.3% and 88% on one dataset, and 91.9% and 85.6% on the other.

  3. Context-Based Tourism Information Filtering with a Semantic Rule Engine

    PubMed Central

    Lamsfus, Carlos; Martin, David; Alzua-Sorzabal, Aurkene; López-de-Ipiña, Diego; Torres-Manzanera, Emilio

    2012-01-01

    This paper presents the CONCERT framework, a push/filter information consumption paradigm, based on a rule-based semantic contextual information system for tourism. CONCERT suggests a specific insight of the notion of context from a human mobility perspective. It focuses on the particular characteristics and requirements of travellers and addresses the drawbacks found in other approaches. Additionally, CONCERT suggests the use of digital broadcasting as push communication technology, whereby tourism information is disseminated to mobile devices. This information is then automatically filtered by a network of ontologies and offered to tourists on the screen. The results obtained in the experiments carried out show evidence that the information disseminated through digital broadcasting can be manipulated by the network of ontologies, providing contextualized information that produces user satisfaction. PMID:22778584

  4. The use of misclassification costs to learn rule-based decision support models for cost-effective hospital admission strategies.

    PubMed

    Ambrosino, R; Buchanan, B G; Cooper, G F; Fine, M J

    1995-01-01

    Cost-effective health care is at the forefront of today's important health-related issues. A research team at the University of Pittsburgh has been interested in lowering the cost of medical care by attempting to define a subset of patients with community-acquired pneumonia for whom outpatient therapy is appropriate and safe. Sensitivity and specificity requirements for this domain make it difficult to use rule-based learning algorithms with standard measures of performance based on accuracy. This paper describes the use of misclassification costs to assist a rule-based machine-learning program in deriving a decision-support aid for choosing outpatient therapy for patients with community-acquired pneumonia.

  5. Fuzzy rule-based image segmentation in dynamic MR images of the liver

    NASA Astrophysics Data System (ADS)

    Kobashi, Syoji; Hata, Yutaka; Tokimoto, Yasuhiro; Ishikawa, Makato

    2000-06-01

    This paper presents a fuzzy rule-based region growing method for segmenting two-dimensional (2-D) and three-dimensional (3-D) magnetic resonance (MR) images. The method is an extension of the conventional region growing method. The proposed method evaluates the growing criteria by using fuzzy inference techniques. The use of fuzzy if-then rules is appropriate for describing knowledge of the lesions in the MR images. To evaluate the performance of the proposed method, it was applied to artificially generated images. In comparison with the conventional method, the proposed method shows high robustness for noisy images. The method was then applied to segment dynamic MR images of the liver. Dynamic MR imaging has been used for the diagnosis of hepatocellular carcinoma (HCC), portal hypertension, and so on. Segmenting the liver, portal vein (PV), and inferior vena cava (IVC) can give a useful description for the diagnosis, and is basic work for a pre-surgery planning system and a virtual endoscope. To apply the proposed method, fuzzy if-then rules are derived from the time-density curves of ROIs. In the experimental results, the 2-D reconstructed and 3-D rendered images of the segmented liver, PV, and IVC are shown. The evaluation by a physician shows that the generated images are consistent with the hepatic anatomy and would be useful for understanding, diagnosis, and pre-surgery planning.
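
    A minimal sketch of fuzzy-rule-driven region growing on a 2-D image: a pixel joins the region when a fuzzy membership computed from its similarity to the region's mean intensity exceeds a threshold. The triangular membership function and threshold here are generic placeholders, not the rules derived from the time-density curves in the paper.

      # Hedged sketch: region growing where the inclusion criterion is a fuzzy membership value.
      import numpy as np
      from collections import deque

      def similarity_membership(value, region_mean, width=30.0):
          """Triangular fuzzy membership of 'pixel value is similar to the region mean'."""
          return max(0.0, 1.0 - abs(value - region_mean) / width)

      def fuzzy_region_grow(img, seed, mu_min=0.5):
          region = {seed}
          queue = deque([seed])
          mean = float(img[seed])
          while queue:
              i, j = queue.popleft()
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  p = (i + di, j + dj)
                  if (0 <= p[0] < img.shape[0] and 0 <= p[1] < img.shape[1]
                          and p not in region
                          and similarity_membership(img[p], mean) >= mu_min):
                      region.add(p)
                      queue.append(p)
                      mean = float(np.mean([img[q] for q in region]))   # update the region mean
          return region

      img = np.full((32, 32), 40.0)
      img[8:20, 8:20] = 120.0                              # bright region on a dark background
      print(len(fuzzy_region_grow(img, seed=(12, 12))))    # 144 pixels, i.e. the 12x12 bright region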

  6. SPARQL Query Re-writing Using Partonomy Based Transformation Rules

    NASA Astrophysics Data System (ADS)

    Jain, Prateek; Yeh, Peter Z.; Verma, Kunal; Henson, Cory A.; Sheth, Amit P.

    Often the information present in a spatial knowledge base is represented at a different level of granularity and abstraction than the query constraints. For querying ontologies containing spatial information, the precise relationships between spatial entities have to be specified in the basic graph pattern of a SPARQL query, which can result in long and complex queries. We present a novel approach to help users intuitively write SPARQL queries to query spatial data, rather than relying on knowledge of the ontology structure. Our framework re-writes queries, using transformation rules to exploit part-whole relations between geographical entities, to address the mismatches between query constraints and the knowledge base. Our experiments were performed on completely third-party datasets and queries. Evaluations were performed on the Geonames dataset using questions from the National Geographic Bee serialized into SPARQL, and on the British Administrative Geography Ontology using questions from a popular trivia website. These experiments demonstrate high precision in retrieval of results and ease in writing queries.

  7. Perceptual learning improves adult amblyopic vision through rule-based cognitive compensation.

    PubMed

    Zhang, Jun-Yun; Cong, Lin-Juan; Klein, Stanley A; Levi, Dennis M; Yu, Cong

    2014-04-01

    We investigated whether perceptual learning in adults with amblyopia could be enabled to transfer completely to an orthogonal orientation, which would suggest that amblyopic perceptual learning results mainly from high-level cognitive compensation, rather than plasticity in the amblyopic early visual brain. Nineteen adults (mean age = 22.5 years) with anisometropic and/or strabismic amblyopia were trained following a training-plus-exposure (TPE) protocol. The amblyopic eyes practiced contrast, orientation, or Vernier discrimination at one orientation for six to eight sessions. Then the amblyopic or nonamblyopic eyes were exposed to an orthogonal orientation via practicing an irrelevant task. Training was first performed at a lower spatial frequency (SF), then at a higher SF near the cutoff frequency of the amblyopic eye. Perceptual learning was initially orientation specific. However, after exposure to the orthogonal orientation, learning transferred to an orthogonal orientation completely. Reversing the exposure and training order failed to produce transfer. Initial lower SF training led to broad improvement of contrast sensitivity, and later higher SF training led to more specific improvement at high SFs. Training improved visual acuity by 1.5 to 1.6 lines (P < 0.001) in the amblyopic eyes with computerized tests and a clinical E acuity chart. It also improved stereoacuity by 53% (P < 0.001). The complete transfer of learning suggests that perceptual learning in amblyopia may reflect high-level learning of rules for performing a visual discrimination task. These rules are applicable to new orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation.

  8. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS) that is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on the feature points that provide facial action cues and is extracted from the shape vertices of AAM, which have a natural correspondence to face muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.

  9. Construction of a Clinical Decision Support System for Undergoing Surgery Based on Domain Ontology and Rules Reasoning

    PubMed Central

    Bau, Cho-Tsan; Huang, Chung-Yi

    2014-01-01

    Abstract Objective: To construct a clinical decision support system (CDSS) for undergoing surgery based on domain ontology and rules reasoning in the setting of hospitalized diabetic patients. Materials and Methods: The ontology was created with a modified ontology development method, including specification and conceptualization, formalization, implementation, and evaluation and maintenance. The Protégé–Web Ontology Language editor was used to implement the ontology. Embedded clinical knowledge was elicited to complement the domain ontology with formal concept analysis. The decision rules were translated into JENA format, which JENA can use to infer recommendations based on patient clinical situations. Results: The ontology includes 31 classes and 13 properties, plus 38 JENA rules that were built to generate recommendations. The evaluation studies confirmed the correctness of the ontology, acceptance of recommendations, satisfaction with the system, and usefulness of the ontology for glycemic management of diabetic patients undergoing surgery, especially for domain experts. Conclusions: The contribution of this research is to set up an evidence-based hybrid ontology and an evaluation method for CDSS. The system can help clinicians to achieve inpatient glycemic control in diabetic patients undergoing surgery while avoiding hypoglycemia. PMID:24730353

  10. Advances in Optical Fiber-Based Faraday Rotation Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, A D; McHale, G B; Goerz, D A

    2009-07-27

    In the past two years, we have used optical fiber-based Faraday Rotation Diagnostics (FRDs) to measure pulsed currents on several dozen capacitively driven and explosively driven pulsed power experiments. We have made simplifications to the necessary hardware for quadrature-encoded polarization analysis, including development of an all-fiber analysis scheme. We have developed a numerical model that is useful for predicting and quantifying deviations from the ideal diagnostic response. We have developed a method of analyzing quadrature-encoded FRD data that is simple to perform and offers numerous advantages over several existing methods. When comparison has been possible, we have seen good agreement with our FRDs and other current sensors.

  11. The Epistemology of a Rule-Based Expert System: A Framework for Explanation.

    DTIC Science & Technology

    1982-01-01

    Hypothesis (e.coli, cryptococcus): "concluded by"; 3) Rule (Rule543, Rule535): "predicates"; 4) Hypothesis (meningitis, bacterial, steroids, alcoholic): "more general"; 5) ... the hypothesis "e.coli is causing meningitis" before "cryptococcus is causing meningitis" is strategic. And recalling an earlier example

  12. Conformance Testing: Measurement Decision Rules

    NASA Technical Reports Server (NTRS)

    Mimbs, Scott M.

    2010-01-01

    The goal of a Quality Management System (QMS) as specified in ISO 9001 and AS9100 is to provide assurance to the customer that end products meet specifications. Measuring devices, often called measuring and test equipment (MTE), are used to provide the evidence of product conformity to specified requirements. Unfortunately, processes that employ MTE can become a weak link to the overall QMS if proper attention is not given to the measurement process design, capability, and implementation. Documented "decision rules" establish the requirements to ensure measurement processes provide the measurement data that supports the needs of the QMS. Measurement data are used to make the decisions that impact all areas of technology. Whether measurements support research, design, production, or maintenance, ensuring the data supports the decision is crucial. Measurement data quality can be critical to the resulting consequences of measurement-based decisions. Historically, most industries required simplistic, one-size-fits-all decision rules for measurements. One-size-fits-all rules in some cases are not rigorous enough to provide adequate measurement results, while in other cases are overly conservative and too costly to implement. Ideally, decision rules should be rigorous enough to match the criticality of the parameter being measured, while being flexible enough to be cost effective. The goal of a decision rule is to ensure that measurement processes provide data with a sufficient level of quality to support the decisions being made - no more, no less. This paper discusses the basic concepts of providing measurement-based evidence that end products meet specifications. Although relevant to all measurement-based conformance tests, the target audience is the MTE end-user, which is anyone using MTE other than calibration service providers. Topics include measurement fundamentals, the associated decision risks, verifying conformance to specifications, and basic measurement
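
    A small worked illustration of the kind of decision rule discussed above: a guard-banded acceptance rule, in which a unit is accepted only if the measured value lies inside the specification limits shrunk by a guard band proportional to the measurement uncertainty. The k = 2 (approximately 95 % expanded uncertainty) guard band used here is a common textbook choice, offered only as an example and not as the rule recommended in the paper.

      # Hedged sketch: guard-banded conformance decision rule.
      def decide(measured, lower_spec, upper_spec, std_uncertainty, k=2.0):
          """Accept only if the measurement is inside the spec limits by at least
          a guard band g = k * u (k = 2 approximates 95 % expanded uncertainty)."""
          guard = k * std_uncertainty
          if lower_spec + guard <= measured <= upper_spec - guard:
              return "accept"
          if measured < lower_spec or measured > upper_spec:
              return "reject"
          return "indeterminate"   # inside spec but within the guard band: re-test or use better MTE

      # Example: a 10.00 V +/- 0.05 V specification measured with u = 0.01 V
      for reading in (10.01, 10.045, 10.07):
          print(reading, decide(reading, 9.95, 10.05, 0.01))
      # 10.01 accept, 10.045 indeterminate, 10.07 reject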

  13. Model-based Systems Engineering: Creation and Implementation of Model Validation Rules for MOS 2.0

    NASA Technical Reports Server (NTRS)

    Schmidt, Conrad K.

    2013-01-01

    Model-based Systems Engineering (MBSE) is an emerging modeling application that is used to enhance the system development process. MBSE allows for the centralization of project and system information that would otherwise be stored in extraneous locations, yielding better communication, expedited document generation and increased knowledge capture. Based on MBSE concepts and the employment of the Systems Modeling Language (SysML), extremely large and complex systems can be modeled from conceptual design through all system lifecycles. The Operations Revitalization Initiative (OpsRev) seeks to leverage MBSE to modernize the aging Advanced Multi-Mission Operations Systems (AMMOS) into the Mission Operations System 2.0 (MOS 2.0). The MOS 2.0 will be delivered in a series of conceptual and design models and documents built using the modeling tool MagicDraw. To ensure model completeness and cohesiveness, it is imperative that the MOS 2.0 models adhere to the specifications, patterns and profiles of the Mission Service Architecture Framework, thus leading to the use of validation rules. This paper outlines the process by which validation rules are identified, designed, implemented and tested. Ultimately, these rules provide the ability to maintain model correctness and synchronization in a simple, quick and effective manner, thus allowing the continuation of project and system progress.

  14. Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines

    PubMed Central

    2010-01-01

    Background Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. Methods A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). Results The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. Conclusions The framework is an

  15. Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines.

    PubMed

    Ongenae, Femke; De Backere, Femke; Steurbaut, Kristof; Colpaert, Kirsten; Kerckhove, Wannes; Decruyenaere, Johan; De Turck, Filip

    2010-01-18

    Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. The framework is an effective solution for computerizing

  16. Ionic force field optimization based on single-ion and ion-pair solvation properties: Going beyond standard mixing rules

    NASA Astrophysics Data System (ADS)

    Fyta, Maria; Netz, Roland R.

    2012-03-01

    Using molecular dynamics (MD) simulations in conjunction with the SPC/E water model, we optimize ionic force-field parameters for seven different halide and alkali ions, considering a total of eight ion-pairs. Our strategy is based on simultaneously optimizing single-ion and ion-pair properties, i.e., we first fix ion-water parameters based on single-ion solvation free energies, and in a second step determine the cation-anion interaction parameters (traditionally given by mixing or combination rules) based on the Kirkwood-Buff theory without modification of the ion-water interaction parameters. In doing so, we have introduced scaling factors for the cation-anion Lennard-Jones (LJ) interaction that quantify deviations from the standard mixing rules. For the rather size-symmetric salt solutions involving bromide and chloride ions, the standard mixing rules work fine. On the other hand, for the iodide and fluoride solutions, corresponding to the largest and smallest anion considered in this work, a rescaling of the mixing rules was necessary. For iodide, the experimental activities suggest more tightly bound ion pairing than given by the standard mixing rules, which is achieved in simulations by reducing the scaling factor of the cation-anion LJ energy. For fluoride, the situation is different and the simulations show too strong an attraction between fluoride and cations when compared with experimental data. For NaF, the situation can be rectified by increasing the cation-anion LJ energy. For KF, it proves necessary to increase the effective cation-anion Lennard-Jones diameter. The optimization strategy outlined in this work can be easily adapted to different kinds of ions.
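
    The deviation from standard mixing rules described above can be written compactly: with Lorentz-Berthelot combination rules sigma_ij = (sigma_i + sigma_j)/2 and eps_ij = sqrt(eps_i * eps_j), the cation-anion cross interaction is rescaled by factors lambda_sigma and lambda_eps, with lambda = 1 recovering the standard rules. The short sketch below just evaluates these expressions; the numerical parameter values are placeholders, not the optimized force-field values from the paper.

      # Hedged sketch: Lorentz-Berthelot mixing rules with scaling factors for the
      # cation-anion Lennard-Jones cross terms (lambda = 1 recovers the standard rules).
      import math

      def lj_cross(sigma_i, eps_i, sigma_j, eps_j, lam_sigma=1.0, lam_eps=1.0):
          sigma_ij = lam_sigma * 0.5 * (sigma_i + sigma_j)   # arithmetic mean, rescaled
          eps_ij = lam_eps * math.sqrt(eps_i * eps_j)        # geometric mean, rescaled
          return sigma_ij, eps_ij

      # placeholder single-ion parameters (nm, kJ/mol), not the optimized values from the paper
      na_plus = (0.23, 0.45)
      iodide = (0.51, 0.27)
      print(lj_cross(*na_plus, *iodide))                 # standard mixing rules
      print(lj_cross(*na_plus, *iodide, lam_eps=0.8))    # rescaled cross energy, deviating from the rules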

  17. An algorithm for rule-in and rule-out of acute myocardial infarction using a novel troponin I assay.

    PubMed

    Lindahl, Bertil; Jernberg, Tomas; Badertscher, Patrick; Boeddinghaus, Jasper; Eggers, Kai M; Frick, Mats; Rubini Gimenez, Maria; Linder, Rickard; Ljung, Lina; Martinsson, Arne; Melki, Dina; Nestelberger, Thomas; Rentsch, Katharina; Reichlin, Tobias; Sabti, Zaid; Schubera, Marie; Svensson, Per; Twerenbold, Raphael; Wildi, Karin; Mueller, Christian

    2017-01-15

    To derive and validate a hybrid algorithm for rule-out and rule-in of acute myocardial infarction based on measurements at presentation and after 2 hours with a novel cardiac troponin I (cTnI) assay. The algorithm was derived and validated in two cohorts (605 and 592 patients) from multicentre studies enrolling chest pain patients presenting to the emergency department (ED) with onset of last episode within 12 hours. The index diagnosis and cardiovascular events up to 30 days were adjudicated by independent reviewers. In the validation cohort, 32.6% of the patients were ruled out on ED presentation, 6.1% were ruled in and 61.3% remained undetermined. A further 22% could be ruled out and 9.8% ruled in, after 2 hours. In total, 54.6% of the patients were ruled out with a negative predictive value (NPV) of 99.4% (95% CI 97.8% to 99.9%) and a sensitivity of 97.7% (95% CI 91.9% to 99.7%); 15.8% were ruled in with a positive predictive value (PPV) of 74.5% (95% CI 64.8% to 82.2%) and a specificity of 95.2% (95% CI 93.0% to 96.9%); and 29.6% remained undetermined after 2 hours. No patient in the rule-out group died during the 30-day follow-up in the two cohorts. This novel two-step algorithm based on cTnI measurements enabled just over a third of the patients with acute chest pain to be ruled in or ruled out already at presentation and an additional third after 2 hours. This strategy maximises the speed of rule-out and rule-in while maintaining a high NPV and PPV, respectively.

  18. Ability-Grouping and Academic Inequality: Evidence from Rule-Based Student Assignments. NBER Working Paper No. 14911

    ERIC Educational Resources Information Center

    Jackson, C. Kirabo

    2009-01-01

    In Trinidad and Tobago students are assigned to secondary schools after fifth grade based on achievement tests, leading to large differences in the school environments to which students of differing initial levels of achievement are exposed. Using both a regression discontinuity design and rule-based instrumental variables to address…

  19. A Robust High-Performance GPS L1 Receiver with Single-stage Quadrature Radio-Frequency Circuit

    NASA Astrophysics Data System (ADS)

    Liu, Jianghua; Xu, Weilin; Wan, Qinq; Liu, Tianci

    2018-03-01

    A low power, current-reuse, single-stage quadrature radio-frequency part (SQRF) is proposed for a GPS L1 receiver in a 180 nm CMOS process. The proposed circuit consists of an LNA, a mixer and a QVCO, and is called the QLMV cell. A two-block stacked topology is adopted in this design: the parallel QVCO and mixer placed on the top form the upper stacked block, and the LNA placed on the bottom forms the other stacked block. The two blocks share the current and achieve low power performance. To improve stability, a floating current source is proposed; it isolates the local oscillator signal from the input RF signal, which gives the whole circuit robust, high performance. The results show a conversion gain of 34 dB, a noise figure of 3 dB, phase noise of -110 dBc/Hz at 1 MHz and an IIP3 of -20 dBm. The proposed circuit dissipates 1.7 mW from a 1 V supply.

  20. State Identification of Hoisting Motors Based on Association Rules for Quayside Container Crane

    NASA Astrophysics Data System (ADS)

    Li, Q. Z.; Gang, T.; Pan, H. Y.; Xiong, H.

    2017-07-01

    The hoisting motor of a quayside container crane is a complex system whose running status evolves over the long term according to characteristic patterns, and these patterns can be exploited. Through association rule analysis, this paper introduces a similarity measure between association rules and applies it to status identification of quayside container crane hoisting motors. The approach is finally validated by an example: some rules change by only a small amplitude, which regular monitoring does not easily detect, yet it is precisely these small changes that lead to mechanical failure. Therefore, using changes in the association rules to monitor motor status has strong practical significance.
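
    A toy sketch of the monitoring idea described above: mine association rules (support and confidence) from discretized condition-monitoring records in a reference period and in a recent period, and flag rules whose confidence has drifted. The itemsets and thresholds are invented placeholders, and the similarity measure used in the paper is not reproduced here.

      # Hedged sketch: flag drifts in association-rule confidence between two monitoring periods.
      from itertools import combinations

      def rule_confidences(transactions, min_support=0.25):
          n = len(transactions)
          conf = {}
          items = sorted({i for t in transactions for i in t})
          for a, b in combinations(items, 2):
              for x, y in ((a, b), (b, a)):
                  nx = sum(1 for t in transactions if x in t)
                  nxy = sum(1 for t in transactions if x in t and y in t)
                  if nx and nxy / n >= min_support:
                      conf[(x, y)] = nxy / nx        # confidence of the rule x -> y
          return conf

      def drifted_rules(ref, recent, tol=0.1):
          return {r: (ref[r], recent[r]) for r in ref.keys() & recent.keys()
                  if abs(ref[r] - recent[r]) > tol}

      ref_period = [{"high_temp", "high_vibration"}, {"high_temp", "high_vibration"},
                    {"normal_load"}, {"high_temp", "high_vibration"}]
      recent_period = [{"high_temp"}, {"high_temp", "high_vibration"},
                       {"high_temp"}, {"normal_load"}]
      print(drifted_rules(rule_confidences(ref_period), rule_confidences(recent_period)))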

  1. A Novel Rules Based Approach for Estimating Software Birthmark

    PubMed Central

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363

  2. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  3. A rough set-based association rule approach implemented on a brand trust evaluation model

    NASA Astrophysics Data System (ADS)

    Liao, Shu-Hsien; Chen, Yin-Ju

    2017-09-01

    In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. The brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, rough set-based association rule induction, implemented on a brand trust evaluation model. In addition, it presents one way to deal with data uncertainty when analysing ratio-scale data, while creating predictive if-then rules that generalise data values to the retail region. As such, this study applies the algorithms to analyse brand trust recall for alcoholic beverages. Finally, discussion and conclusions are presented for further managerial implications.

  4. Determination of projection effects of CMEs using quadrature observations with the two STEREO spacecraft

    NASA Astrophysics Data System (ADS)

    Bronarska, K.; Michalek, G.

    2018-07-01

    Since 1995 coronal mass ejections (CMEs) have been routinely observed thanks to the sensitive Large Angle and Spectrometric Coronagraphs (LASCO) on board the Solar and Heliospheric Observatory (SOHO) mission. Their observed characteristics are stored, among others, in the SOHO/LASCO catalog. These parameters are commonly used in scientific studies. Unfortunately, coronagraphic observations of CMEs are subject to projection effects. This makes it practically impossible to determine the true properties of CMEs and therefore makes it more difficult to forecast their geoeffectiveness. In this study, using quadrature observations with the two Solar Terrestrial Relations Observatory (STEREO) spacecraft, we estimate the projection effect affecting the velocities of CMEs included in the SOHO/LASCO catalog. It is demonstrated that this effect depends significantly on the width and source location of CMEs. It can be very significant for narrow events originating from the disk center. The effect diminishes with increasing width and increasing absolute longitude of the source location of CMEs. For very wide events (width ⩾ 250°) or limb events (|longitude| ⩾ 70°) the projection effects completely disappear.

  5. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method

    USGS Publications Warehouse

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-01-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.
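
    For orientation, a generic Monte Carlo (Saltelli-style pick-freeze) estimator of first-order Sobol' indices is sketched below for a toy function of three vegetation-like parameters; it illustrates what the indices measure (the fraction of output variance attributable to each input alone) but is not the effectively subsampled quadratures / least-squares polynomial method used in the study.

      # Hedged sketch: first-order Sobol' indices via the standard Saltelli pick-freeze estimator.
      import numpy as np

      def first_order_sobol(f, bounds, n=20000, seed=0):
          rng = np.random.default_rng(seed)
          d = len(bounds)
          lo, hi = np.array(bounds, dtype=float).T
          A = lo + (hi - lo) * rng.random((n, d))
          B = lo + (hi - lo) * rng.random((n, d))
          fA, fB = f(A), f(B)
          total_var = np.var(np.concatenate([fA, fB]))
          S = np.empty(d)
          for i in range(d):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                    # "freeze" input i from the second sample
              S[i] = np.mean(fB * (f(ABi) - fA)) / total_var
          return S

      # toy response standing in for a model output such as wave dissipation
      def response(x):
          density, height, diameter = x[:, 0], x[:, 1], x[:, 2]
          return 3.0 * density + 1.0 * height + 0.1 * diameter + 0.5 * density * height

      print(first_order_sobol(response, [(0, 1), (0, 1), (0, 1)]).round(2))
      # density has by far the largest first-order index, diameter a negligible one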

  6. A rule-based shell to hierarchically organize HST observations

    NASA Technical Reports Server (NTRS)

    Bose, Ashim; Gerb, Andrew

    1995-01-01

    An observing program on the Hubble Space Telescope (HST) is described in terms of exposures that are obtained by one or more of the instruments onboard the HST. These exposures are organized into a hierarchy of structures for purposes of efficient scheduling of observations. The process by which exposures get organized into the higher-level structures is called merging. This process relies on rules to determine which observations can be 'merged' into the same higher level structure, and which cannot. The TRANSformation expert system converts proposals for astronomical observations with HST into detailed observing plans. The conversion process includes the task of merging. Within TRANS, we have implemented a declarative shell to facilitate merging. This shell offers the following features: (1) an easy way of specifying rules on when to merge and when not to merge, (2) a straightforward priority mechanism for resolving conflicts among rules, (3) an explanation facility for recording the merging history, (4) a report generating mechanism to help users understand the reasons for merging, and (5) a self-documenting mechanism that documents all the merging rules that have been defined in the shell, ordered by priority. The merging shell is implemented using an object-oriented paradigm in CLOS. It has been a part of operational TRANS (after extensive testing) since July 1993. It has fulfilled all performance expectations, and has considerably simplified the process of implementing new or changed requirements for merging. The users are pleased with its report-generating and self-documenting features.

  7. Perceptual Learning Improves Adult Amblyopic Vision Through Rule-Based Cognitive Compensation

    PubMed Central

    Zhang, Jun-Yun; Cong, Lin-Juan; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong

    2014-01-01

    Purpose. We investigated whether perceptual learning in adults with amblyopia could be enabled to transfer completely to an orthogonal orientation, which would suggest that amblyopic perceptual learning results mainly from high-level cognitive compensation, rather than plasticity in the amblyopic early visual brain. Methods. Nineteen adults (mean age = 22.5 years) with anisometropic and/or strabismic amblyopia were trained following a training-plus-exposure (TPE) protocol. The amblyopic eyes practiced contrast, orientation, or Vernier discrimination at one orientation for six to eight sessions. Then the amblyopic or nonamblyopic eyes were exposed to an orthogonal orientation via practicing an irrelevant task. Training was first performed at a lower spatial frequency (SF), then at a higher SF near the cutoff frequency of the amblyopic eye. Results. Perceptual learning was initially orientation specific. However, after exposure to the orthogonal orientation, learning transferred to an orthogonal orientation completely. Reversing the exposure and training order failed to produce transfer. Initial lower SF training led to broad improvement of contrast sensitivity, and later higher SF training led to more specific improvement at high SFs. Training improved visual acuity by 1.5 to 1.6 lines (P < 0.001) in the amblyopic eyes with computerized tests and a clinical E acuity chart. It also improved stereoacuity by 53% (P < 0.001). Conclusions. The complete transfer of learning suggests that perceptual learning in amblyopia may reflect high-level learning of rules for performing a visual discrimination task. These rules are applicable to new orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation. PMID:24550359

  8. Mechanisms of rule acquisition and rule following in inductive reasoning.

    PubMed

    Crescentini, Cristiano; Seyed-Allaei, Shima; De Pisapia, Nicola; Jovicich, Jorge; Amati, Daniele; Shallice, Tim

    2011-05-25

    Despite the recent interest in the neuroanatomy of inductive reasoning processes, the regional specificity within prefrontal cortex (PFC) for the different mechanisms involved in induction tasks remains to be determined. In this study, we used fMRI to investigate the contribution of PFC regions to rule acquisition (rule search and rule discovery) and rule following. Twenty-six healthy young adult participants were presented with a series of images of cards, each consisting of a set of circles numbered in sequence with one colored blue. Participants had to predict the position of the blue circle on the next card. The rules that had to be acquired pertained to the relationship among succeeding stimuli. Responses given by subjects were categorized in a series of phases either tapping rule acquisition (responses given up to and including rule discovery) or rule following (correct responses after rule acquisition). Mid-dorsolateral PFC (mid-DLPFC) was active during rule search and remained active until successful rule acquisition. By contrast, rule following was associated with activation in temporal, motor, and medial/anterior prefrontal cortex. Moreover, frontopolar cortex (FPC) was active throughout the rule acquisition and rule following phases before a rule became familiar. We attributed activation in mid-DLPFC to hypothesis generation and in FPC to integration of multiple separate inferences. The present study provides evidence that brain activation during inductive reasoning involves a complex network of frontal processes and that different subregions respond during rule acquisition and rule following phases.

  9. 50 CFR 424.16 - Proposed rules.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS SUBCHAPTER A LISTING ENDANGERED AND THREATENED SPECIES AND DESIGNATING CRITICAL HABITAT Revision of the Lists § 424.16 Proposed rules. (a) General. Based... any proposed rule to list, delist, or reclassify a species, or to designate or revise critical habitat...

  10. 18 CFR 385.104 - Rule of construction (Rule 104).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Rule of construction (Rule 104). 385.104 Section 385.104 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... Definitions § 385.104 Rule of construction (Rule 104). To the extent that the text of a rule is inconsistent...

  11. 18 CFR 385.104 - Rule of construction (Rule 104).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Rule of construction (Rule 104). 385.104 Section 385.104 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... Definitions § 385.104 Rule of construction (Rule 104). To the extent that the text of a rule is inconsistent...

  12. Phonological reduplication in sign language: Rules rule

    PubMed Central

    Berent, Iris; Dupuis, Amanda; Brentari, Diane

    2014-01-01

    Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal. PMID:24959158

  13. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
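
    The contrast between random and systematic sampling can be illustrated with a small numerical sketch. The lattice point set below is a simple Korobov-type rank-1 lattice used only as a stand-in for Conroy's closed symmetric patterns (an assumption for illustration, not his construction); both estimators use the same number of points.

    import numpy as np

    def f(x, y):
        # Smooth test integrand on the unit square; exact integral is (1 - cos 1)^2.
        return np.sin(x) * np.sin(y)

    exact = (1.0 - np.cos(1.0)) ** 2
    n = 2003  # number of sample points (a prime, convenient for the lattice)

    # Monte Carlo: points distributed randomly over the integration region.
    rng = np.random.default_rng(0)
    xy = rng.random((n, 2))
    mc_estimate = f(xy[:, 0], xy[:, 1]).mean()

    # Systematic sampling: a rank-1 lattice whose points fill the square in a
    # regular pattern -- a stand-in for Conroy's point sets, not his method.
    k = np.arange(n)
    lattice = np.column_stack(((k / n) % 1.0, (k * 377 / n) % 1.0))
    lat_estimate = f(lattice[:, 0], lattice[:, 1]).mean()

    print(f"exact        {exact:.6f}")
    print(f"Monte Carlo  {mc_estimate:.6f}  (error {abs(mc_estimate - exact):.2e})")
    print(f"lattice      {lat_estimate:.6f}  (error {abs(lat_estimate - exact):.2e})")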

  14. A new type of simplified fuzzy rule-based system

    NASA Astrophysics Data System (ADS)

    Angelov, Plamen; Yager, Ronald

    2012-02-01

    Over the last quarter of a century, two types of fuzzy rule-based (FRB) systems have dominated, namely the Mamdani and Takagi-Sugeno types. They use the same type of scalar fuzzy sets, defined per input variable in their antecedent part, which are aggregated at the inference stage by t-norms or co-norms representing logical AND/OR operations. In this paper, we propose a significantly simplified alternative that defines the antecedent part of FRB systems by data Clouds and density distribution. This new type of FRB system goes further in conceptual and computational simplification while preserving the best features (flexibility, modularity, and human intelligibility) of its predecessors. The proposed concept offers an alternative, non-parametric form of the rule antecedents, which fully reflects the real data distribution and does not require any explicit aggregation operations or scalar membership functions to be imposed. Instead, it derives the fuzzy membership of a particular data sample to a Cloud from the density distribution of the data associated with that Cloud. Contrast this with clustering, a parametric data-space decomposition/partitioning in which the fuzzy membership to a cluster is measured by the distance to the cluster centre/prototype, ignoring all the data that form that cluster or only approximating their distribution. The proposed approach takes into account fully and exactly the spatial distribution and similarity of all the real data through an innovative and much simplified form of the antecedent part. We provide several numerical examples to illustrate the concept.
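
    A minimal sketch of the central idea, assigning membership by the density of a sample relative to all data previously associated with a Cloud rather than by distance to a prototype, is given below. The Cauchy-type density and the toy data are assumptions made here for illustration; they are not necessarily the exact formulation used in the paper.

    import numpy as np

    def cloud_membership(x, cloud):
        """Relative membership of sample x to a data Cloud, computed from the
        density of x with respect to ALL samples in the Cloud (no prototype,
        no explicit membership function).  A Cauchy-type local density is
        assumed here purely for illustration."""
        cloud = np.asarray(cloud, dtype=float)
        sq_dists = np.sum((cloud - x) ** 2, axis=1)
        return 1.0 / (1.0 + sq_dists.mean())

    # Two hypothetical Clouds of 2-D samples (illustrative data only).
    cloud_a = [[0.10, 0.20], [0.20, 0.10], [0.15, 0.25], [0.05, 0.15]]
    cloud_b = [[0.90, 0.80], [0.85, 0.95], [0.80, 0.90]]

    x = np.array([0.2, 0.2])
    gamma = np.array([cloud_membership(x, c) for c in (cloud_a, cloud_b)])
    gamma /= gamma.sum()          # normalised (relative) memberships
    print(gamma)                  # x belongs mostly to the first Cloud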

  15. 20 CFR 345.401 - General rule.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false General rule. 345.401 Section 345.401... EMPLOYERS' CONTRIBUTIONS AND CONTRIBUTION REPORTS Benefit Charging § 345.401 General rule. Effective January... the basis of a claim for benefits to that employee's base year employer's cumulative benefit balance...

  16. 77 FR 22200 - Rescission of Rules

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-13

    ... ``Identity Theft Rules,'' 16 CFR part 681, and its rules governing ``Disposal of Consumer Report Information...; Duties of Creditors Regarding Risk-Based Pricing, 16 CFR part 640; Duties of Users of Consumer Reports... collection, assembly, and use of consumer report information and provides the framework for the credit...

  17. 50 CFR 424.16 - Proposed rules.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS SUBCHAPTER A LISTING ENDANGERED AND THREATENED SPECIES AND DESIGNATING CRITICAL HABITAT Revision of the Lists § 424.16 Proposed rules. (a) General. Based...—(1) Notifications. In the case of any proposed rule to list, delist, or reclassify a species, or to...

  18. Education Based on the Rule of St. Benedict: Centuries-Old Rule Still Relevant Today.

    ERIC Educational Resources Information Center

    Kollar, Rene (OSB)

    2003-01-01

    Discusses how the concept of Catholic education dates to the early days of the church and how the rule of St. Benedict impacts present day Catholic schools. Gives a historical overview and explains how the era of St. Benedict is an important issue to consider in a secular and depersonalized world. (MZ)

  19. Transfer between local and global processing levels by pigeons (Columba livia) and humans (Homo sapiens) in exemplar- and rule-based categorization tasks.

    PubMed

    Aust, Ulrike; Braunöder, Elisabeth

    2015-02-01

    The present experiment investigated pigeons' and humans' processing styles (local or global) in an exemplar-based visual categorization task in which category membership of every stimulus had to be learned individually, and in a rule-based task in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the 2 untrained presentation modes. Humans outperformed pigeons regarding learning speed and accuracy as well as transfer performance and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred needed longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence that the processing mode used depended on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans did not show a preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  20. Accurate phase measurements for thick spherical objects using optical quadrature microscopy

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2009-02-01

    In vitro fertilization (IVF) procedures have resulted in the birth of over three million babies since 1978. Yet the live birth rate in the United States was only 34% in 2005, with 32% of the successful pregnancies resulting in multiple births. These multiple pregnancies were directly attributed to the transfer of multiple embryos to increase the probability that a single, healthy embryo was included. Current viability markers used for IVF, such as the cell number, symmetry, size, and fragmentation, are analyzed qualitatively with differential interference contrast (DIC) microscopy. However, this method is not ideal for quantitative measures beyond the 8-cell stage of development because the cells overlap and obstruct the view within and below the cluster of cells. We have developed the phase-subtraction cell-counting method that uses the combination of DIC and optical quadrature microscopy (OQM) to count the number of cells accurately in live mouse embryos beyond the 8-cell stage. We have also created a preliminary analysis to measure the cell symmetry, size, and fragmentation quantitatively by analyzing the relative dry mass from the OQM image in conjunction with the phase-subtraction count. In this paper, we will discuss the characterization of OQM with respect to measuring the phase accurately for spherical samples that are much larger than the depth of field. Once fully characterized and verified with human embryos, this methodology could provide the means for a more accurate method to score embryo viability.
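
    Optical quadrature microscopy recovers phase from interference signals acquired in quadrature; the sketch below shows the generic arctangent reconstruction on idealized, noiseless in-phase/quadrature images of a thick spherical phase object. It is only an illustrative assumption about the processing, not the authors' pipeline.

    import numpy as np

    def phase_from_quadrature(i_img, q_img):
        """Recover optical phase from idealized in-phase and quadrature
        interference images: I ~ cos(phi), Q ~ sin(phi).  Unwrapping is done
        row by row for simplicity (an assumption; real data usually need
        2-D unwrapping and calibration)."""
        wrapped = np.arctan2(q_img, i_img)
        return np.unwrap(wrapped, axis=1)

    # Synthetic example: a spherical cap of phase delay, like a thick round sample.
    y, x = np.mgrid[-1:1:256j, -1:1:256j]
    r2 = x**2 + y**2
    true_phase = 8.0 * np.sqrt(np.clip(1.0 - r2, 0.0, None))   # radians

    i_img = np.cos(true_phase)
    q_img = np.sin(true_phase)
    recovered = phase_from_quadrature(i_img, q_img)
    print(np.max(np.abs(recovered - true_phase)))   # small for this noiseless case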

  1. High spatial resolution contrast-enhanced MR angiography of the supraaortic arteries using the quadrature body coil at 3.0T: a feasibility study.

    PubMed

    Willinek, Winfried A; Bayer, Thomas; Gieseke, Jürgen; von Falkenhausen, Marcus; Sommer, Torsten; Hoogeveen, Romhild; Wilhelm, Kai; Urbach, Horst; Schild, Hans H

    2007-03-01

    To examine whether the increased signal-to-noise (S/N) available at 3.0T would permit the use of the quadrature body coil for high spatial resolution contrast-enhanced (CE) MR angiography (MRA), and whether the large FOV that was used in our routine 1.5T protocol would also be feasible at 3.0T. In a prospective study, 43 patients and five volunteers were examined on a clinical whole-body 3.0T MR unit (Intera, Philips Medical Systems, Best, The Netherlands) after institutional review board approval and informed consent. Three-dimensional CE MRA (T1 gradient echo-sequence with TR/TE = 5.7/1.93 msec.; acquisition time, 1:54 min.) using randomly segmented central k-space ordering (CENTRA) was acquired with the quadrature body coil over a FOV of 350 mm. A high image matrix of 432x432 yielded a non-zero filled voxel size of 0.81 mm x 0.81 mm x 1.0 mm (0.66 mm^3). For quantitative analysis, contrast ratios (CR) between vessels (S) and signal in surrounding tissue (ST) were calculated [(S-ST)/(S+ST)]. For qualitative analysis, image quality and presence of artifacts were rated by two radiologists in consensus on a five-point scale (1=excellent to 5=nondiagnostic). Digital subtraction angiography (DSA) served as the standard of reference in patients with vascular disease. In the five volunteers, 1.5T CE MRA using a phased array neurovascular coil was available for intraindividual comparison. 3.0T CE MRA was successfully performed in 48/48 subjects (100%). Mean CR +/- SD were 0.76 (139.30/182.42) and 0.87 (235.18/270.14) at 3.0T and 1.5T, respectively. Mean image quality was 3.82+/-0.86. Intraindividual comparison between 1.5T and 3.0T CE MRA in the volunteers revealed no significant difference in image quality (4.2+/-0.74 vs 4.6+/-0.80; p>0.05). Vascular disease was correctly identified in 13/13 patients with DSA correlation. CE MRA of the supraaortic arteries is feasible at 3.0T using a large FOV of 350 mm. The signal gain at 3.0T enables high spatial resolution

  2. Medicaid program; state plan home and community-based services, 5-year period for waivers, provider payment reassignment, and home and community-based setting requirements for Community First Choice and home and community-based services (HCBS) waivers. Final rule.

    PubMed

    2014-01-16

    This final rule amends the Medicaid regulations to define and describe state plan section 1915(i) home and community-based services (HCBS) under the Social Security Act (the Act), as amended by the Affordable Care Act. This rule offers states new flexibilities in providing necessary and appropriate services to elderly and disabled populations. This rule describes Medicaid coverage of the optional state plan benefit to furnish home and community-based services and draw federal matching funds. This rule also provides for a 5-year duration for certain demonstration projects or waivers at the discretion of the Secretary, when they provide medical assistance for individuals dually eligible for Medicaid and Medicare benefits, includes payment reassignment provisions because state Medicaid programs often operate as the primary or only payer for the class of practitioners that includes HCBS providers, and amends Medicaid regulations to provide home and community-based setting requirements related to the Affordable Care Act for the Community First Choice State plan option. This final rule also makes several important changes to the regulations implementing Medicaid 1915(c) HCBS waivers.

  3. 78 FR 30967 - Cross-Border Security-Based Swap Activities; Re-Proposal of Regulation SBSR and Certain Rules and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-23

    ... context of the security-based swap dealer definition, for affiliated groups with a registered security... for Affiliated Groups with Registered Security-Based Swap Dealers); Rule 3a71-5 (Substituted... 13n-12 (Exemption from Requirements Governing Security-Based Swap Data Repositories for Certain Non-U...

  4. Ontology-based classification of remote sensing images using spectral rules

    NASA Astrophysics Data System (ADS)

    Andrés, Samuel; Arvor, Damien; Mougenot, Isabelle; Libourel, Thérèse; Durieux, Laurent

    2017-05-01

    Earth Observation data are of great interest for a wide spectrum of scientific domain applications. Enhanced access to remote sensing images for "domain" experts thus represents a great advance, since it allows users to interpret the images based on their domain expert knowledge. However, such an advantage can also turn into a major limitation if this knowledge is not formalized, and is thus difficult to share with, and be understood by, other users. In this context, knowledge representation techniques such as ontologies should play a major role in the future of remote sensing applications. We implemented an ontology-based prototype to automatically classify Landsat images based on explicit spectral rules. The ontology is designed in a very modular way in order to achieve a generic and versatile representation of concepts we consider of utmost importance in remote sensing. The prototype was tested on four subsets of Landsat images and the results confirmed the potential of ontologies to formalize expert knowledge and classify remote sensing images.
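
    A minimal sketch of classification by explicit spectral rules is shown below. The NDVI formula is standard, but the thresholds, class labels, and reflectance values are illustrative assumptions, not the rule set encoded in the paper's ontology.

    def classify_pixel(red, nir):
        """Classify one pixel with explicit spectral rules.  The NDVI formula is
        standard; the thresholds and labels are illustrative assumptions only."""
        ndvi = (nir - red) / (nir + red + 1e-9)
        if ndvi > 0.4:
            return "vegetation"
        if ndvi < 0.0 and nir < 0.05:
            return "water"
        return "bare soil / built-up"

    # Hypothetical surface reflectances (red, NIR) for three pixels.
    pixels = [(0.05, 0.45), (0.03, 0.02), (0.20, 0.25)]
    print([classify_pixel(r, n) for r, n in pixels])
    # ['vegetation', 'water', 'bare soil / built-up']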

  5. Generation of highly pure Schrödinger's cat states and real-time quadrature measurements via optical filtering

    NASA Astrophysics Data System (ADS)

    Asavanant, Warit; Nakashima, Kota; Shiozawa, Yu; Yoshikawa, Jun-Ichi; Furusawa, Akira

    2017-12-01

    Until now, Schr\\"odinger's cat states are generated by subtracting single photons from the whole bandwidth of squeezed vacua. However, it was pointed out recently that the achievable purities are limited in such method (J. Yoshikawa, W. Asavanant, and A. Furusawa, arXiv:1707.08146 [quant-ph] (2017)). In this paper, we used our new photon subtraction method with a narrowband filtering cavity and generated a highly pure Schr\\"odinger's cat state with the value of $-0.184$ at the origin of the Wigner function. To our knowledge, this is the highest value ever reported without any loss corrections. The temporal mode also becomes exponentially rising in our method, which allows us to make a real-time quadrature measurement on Schr\\"odinger's cat states, and we obtained the value of $-0.162$ at the origin of the Wigner function.

  6. Choice Rules and Accumulator Networks

    PubMed Central

    2015-01-01

    This article presents a preference accumulation model that can be used to implement a number of different multi-attribute heuristic choice rules, including the lexicographic rule, the majority of confirming dimensions (tallying) rule and the equal weights rule. The proposed model differs from existing accumulators in terms of attribute representation: Leakage and competition, typically applied only to preference accumulation, are also assumed to be involved in processing attribute values. This allows the model to perform a range of sophisticated attribute-wise comparisons, including comparisons that compute relative rank. The ability of a preference accumulation model composed of leaky competitive networks to mimic symbolic models of heuristic choice suggests that these 2 approaches are not incompatible, and that a unitary cognitive model of preferential choice, based on insights from both these approaches, may be feasible. PMID:28670592
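
    A minimal sketch of the idea, a leaky, competitive accumulator whose inputs are attribute-wise wins (a tallying-style comparison), is given below. The update equation, parameters, and data are illustrative assumptions rather than the model specified in the article.

    import numpy as np

    def accumulate(attribute_wins, leak=0.1, inhibition=0.2, dt=0.1,
                   noise_sd=0.05, steps=200, seed=0):
        """Leaky competing accumulation of preference for two options.
        attribute_wins[i, j] = 1 if option j wins attribute i, else 0
        (a tallying-style input).  Returns the final activations."""
        rng = np.random.default_rng(seed)
        n_options = attribute_wins.shape[1]
        x = np.zeros(n_options)
        drift = attribute_wins.mean(axis=0)        # evidence per option
        for _ in range(steps):
            inhib = inhibition * (x.sum() - x)     # competition from the other option
            noise = noise_sd * rng.standard_normal(n_options) * np.sqrt(dt)
            x = np.maximum(0.0, x + dt * (drift - leak * x - inhib) + noise)
        return x

    # Option A wins 3 of 4 attributes, option B wins 1 (hypothetical comparisons).
    wins = np.array([[1, 0],
                     [1, 0],
                     [1, 0],
                     [0, 1]])
    print(accumulate(wins))    # option A ends with the higher activation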

  7. Relationship Between the Expansion Speed and Radial Speed of CMEs Confirmed Using Quadrature Observations from SOHO and STEREO

    NASA Technical Reports Server (NTRS)

    Gopalswamy, Nat; Makela, Pertti; Yashiro, Seiji

    2011-01-01

    It is difficult to measure the true speed of Earth-directed CMEs from a coronagraph along the Sun-Earth line because of the occulting disk. However, the expansion speed (the speed with which the CME appears to spread in the sky plane) can be measured by such a coronagraph. In order to convert the expansion speed to radial speed (which is important for space weather applications) one can use an empirical relationship between the two that assumes an average width for all CMEs. If we have the width information from quadrature observations, we can confirm the relationship between expansion and radial speeds derived by Gopalswamy et al. (2009, CEAB, 33, 115). The STEREO spacecraft were in quadrature with SOHO (STEREO-A ahead of Earth by 87 degrees and STEREO-B 94 degrees behind Earth) on 2011 February 15, when a fast Earth-directed CME occurred. The CME was observed as a halo by the Large-Angle and Spectrometric Coronagraph (LASCO) on board SOHO. The sky-plane speed was measured by SOHO/LASCO as the expansion speed, while the radial speed was measured by STEREO-A and STEREO-B. In addition, STEREO-A and STEREO-B images measured the width of the CME, which is unknown from the Earth view. From the SOHO and STEREO measurements, we confirm the relationship between the expansion speed (Vexp) and radial speed (Vrad) derived previously from geometrical considerations (Gopalswamy et al. 2009): Vrad = 1/2 (1 + cot w) Vexp, where w is the half width of the CME. From STEREO-B images of the CME, we found that the CME had a full width of 75 degrees, so w = 37.5 degrees. This gives the relation as Vrad = 1.15 Vexp. From LASCO observations, we measured Vexp = 897 km/s, so we get the radial speed as 1033 km/s. Direct measurement of radial speed from STEREO gives 945 km/s (STEREO-A) and 1057 km/s (STEREO-B). These numbers differ by only 8.5% and 2.3% (for STEREO-A and STEREO-B, respectively) from the computed value.
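
    The relation and the figures quoted above can be checked directly:

    import numpy as np

    def radial_speed(v_exp, half_width_deg):
        """Gopalswamy et al. (2009) relation: Vrad = 0.5 * (1 + cot(w)) * Vexp,
        with w the half width of the CME."""
        w = np.radians(half_width_deg)
        return 0.5 * (1.0 + 1.0 / np.tan(w)) * v_exp

    v_exp = 897.0      # km/s, LASCO expansion speed quoted in the abstract
    w_half = 37.5      # degrees, half of the 75-degree full width from STEREO-B
    print(radial_speed(v_exp, w_half))   # ~1033 km/s, vs 945-1057 km/s from STEREO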

  8. A personalized health-monitoring system for elderly by combining rules and case-based reasoning.

    PubMed

    Ahmed, Mobyen Uddin

    2015-01-01

    Health-monitoring systems for the elderly in the home environment are a promising way to provide efficient medical services and are of increasing interest to researchers in this area. The problem is often more challenging when the system is self-served and functions as a personalized provision. This paper proposes a personalized, self-served health-monitoring system for the elderly in the home environment that combines general rules with a case-based reasoning approach. The system generates feedback, recommendations and alarms in a personalized manner based on the elderly person's medical information and health parameters such as blood pressure, blood glucose, weight, activity, pulse, etc. A set of general rules is used to classify individual health parameters. The case-based reasoning approach then combines all the different health parameters and generates an overall classification of the health condition. In an evaluation on 323 cases with k=2 (i.e., the top 2 most similar retrieved cases), the sensitivity, specificity and overall accuracy were 90%, 97% and 96%, respectively. The preliminary result is acceptable since the feedback, recommendation and alarm messages are personalized and differ from the general messages. Thus, this approach could possibly be adapted to other situations in personalized elderly monitoring.
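
    A minimal sketch of the two-stage idea, threshold rules for individual parameters followed by k-nearest-neighbour case retrieval for the overall classification, is given below. The thresholds, features, and case base are illustrative assumptions, not the rules or cases of the system described.

    import numpy as np

    def classify_parameter(name, value):
        """General rules for individual health parameters (illustrative
        thresholds only, not clinical guidance)."""
        rules = {"systolic_bp": [(140, "high"), (90, "normal"), (0, "low")],
                 "glucose":     [(11.0, "high"), (4.0, "normal"), (0, "low")]}
        for threshold, label in rules[name]:
            if value >= threshold:
                return label

    def retrieve_overall(new_case, case_base, k=2):
        """Case-based step: overall condition of the k most similar past cases."""
        feats = np.array([c["features"] for c in case_base], dtype=float)
        d = np.linalg.norm(feats - np.asarray(new_case, dtype=float), axis=1)
        nearest = np.argsort(d)[:k]
        return [case_base[i]["condition"] for i in nearest]

    case_base = [{"features": [120, 5.0],  "condition": "stable"},
                 {"features": [150, 12.0], "condition": "needs attention"},
                 {"features": [118, 5.5],  "condition": "stable"}]

    new = [122, 5.2]   # hypothetical reading: systolic BP (mmHg), glucose (mmol/L)
    print(classify_parameter("systolic_bp", new[0]), classify_parameter("glucose", new[1]))
    print(retrieve_overall(new, case_base, k=2))   # ['stable', 'stable']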

  9. 4 CFR 22.1 - Applicability of Rules [Rule 1].

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 4 Accounts 1 2010-01-01 2010-01-01 false Applicability of Rules [Rule 1]. 22.1 Section 22.1... ACCOUNTABILITY OFFICE CONTRACT APPEALS BOARD § 22.1 Applicability of Rules [Rule 1]. The Government... all appeals filed with the Board on or after October 1, 2007. ...

  10. Information Based Numerical Practice.

    DTIC Science & Technology

    1987-02-01

    ... characterization by comparative computational studies of various benchmark problems. See e.g. [MacNeal, Harder (1985)], [Robinson, Blackham (1981)] ... FOR NONADAPTIVE METHODS. 2.1. THE QUADRATURE FORMULA. The simplest example studied in detail in the literature is the problem of the optimal quadrature ... formulae and the functional analytic prerequisites for the study of optimal formulae, we refer to the large monograph (808 pp.) of [Sobolev (1974)]. ...

  11. 26 CFR 1.1502-36 - Unified loss rule.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... If stock of more than one subsidiary is transferred in the transaction, the election may be made with...) INCOME TAXES Basis, Stock Ownership, and Earnings and Profits Rules § 1.1502-36 Unified loss rule. (a) In general—(1) Scope. This section provides rules for adjusting members' bases in stock of a subsidiary (S...

  12. Improving the anesthetic process by a fuzzy rule based medical decision system.

    PubMed

    Mendez, Juan Albino; Leon, Ana; Marrero, Ayoze; Gonzalez-Cava, Jose M; Reboso, Jose Antonio; Estevez, Jose Ignacio; Gomez-Gonzalez, José F

    2018-01-01

    The main objective of this research is the design and implementation of a new fuzzy logic tool for automatic drug delivery in patients undergoing general anesthesia. The aim is to adjust the drug dose to the real patient needs using heuristic knowledge provided by clinicians. A two-level computer decision system is proposed. The idea is to release the clinician from routine tasks so that he can focus on other variables of the patient. The controller uses the Bispectral Index (BIS) to assess the hypnotic state of the patient. The fuzzy controller was included in a closed-loop system to reach the BIS target and reject disturbances. BIS was measured using a BIS VISTA monitor, a device capable of calculating the hypnosis level of the patient through EEG information. An infusion pump with propofol 1% is used to supply the drug to the patient. The inputs to the fuzzy inference system are the BIS error and the BIS rate. The output is the infusion rate increment. The mapping from the input information to the appropriate output is given by a rule base built on the knowledge of clinicians. To evaluate the performance of the proposed fuzzy closed-loop system, an observational study was carried out. Eighty-one patients scheduled for ambulatory surgery were randomly assigned to 2 groups: one group using a fuzzy logic based closed-loop system (FCL) to automate the administration of propofol (42 cases); the second group using manual delivery of the drug (39 cases). In both groups, the BIS target was 50. The FCL, designed with intuitive logic rules based on clinician experience, performed satisfactorily and outperformed manual administration in terms of accuracy throughout the maintenance stage. Copyright © 2018 Elsevier B.V. All rights reserved.
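
    A minimal sketch of a fuzzy inference step of the kind described (BIS error and BIS rate in, infusion-rate increment out) is shown below. The membership functions, rule table, output values, and sign conventions are illustrative assumptions, not the clinical rule base used in the study.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with shoulders at a, c and peak at b."""
        return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                                  (c - x) / (c - b + 1e-12)), 0.0, 1.0)

    def infusion_increment(bis_error, bis_rate):
        """Mamdani-style inference with product AND and weighted-average
        defuzzification.  Here bis_error = measured BIS - target (positive when
        the patient is too lightly anesthetized); the output is the change in
        propofol infusion rate (arbitrary units, illustrative only)."""
        e = {"neg": tri(bis_error, -30, -15, 0), "zero": tri(bis_error, -10, 0, 10),
             "pos": tri(bis_error, 0, 15, 30)}
        r = {"neg": tri(bis_rate, -6, -3, 0), "zero": tri(bis_rate, -2, 0, 2),
             "pos": tri(bis_rate, 0, 3, 6)}
        # Rule table: (error label, rate label) -> crisp output increment.
        rules = {("neg", "neg"): -2.0, ("neg", "zero"): -1.0, ("neg", "pos"): 0.0,
                 ("zero", "neg"): -1.0, ("zero", "zero"): 0.0, ("zero", "pos"): 1.0,
                 ("pos", "neg"): 0.0, ("pos", "zero"): 1.0, ("pos", "pos"): 2.0}
        w = np.array([e[le] * r[lr] for (le, lr) in rules])
        out = np.array(list(rules.values()))
        return float((w * out).sum() / (w.sum() + 1e-12))

    # Measured BIS 62 against a target of 50, with BIS still rising.
    print(infusion_increment(bis_error=62 - 50, bis_rate=1.5))   # ~ +1.7: increase infusion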

  13. 18 CFR 385.103 - References to rules (Rule 103).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false References to rules (Rule 103). 385.103 Section 385.103 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... Definitions § 385.103 References to rules (Rule 103). This part cross-references its sections according to...

  14. 18 CFR 385.103 - References to rules (Rule 103).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false References to rules (Rule 103). 385.103 Section 385.103 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... Definitions § 385.103 References to rules (Rule 103). This part cross-references its sections according to...

  15. Multiple-rule bias in the comparison of classification rules

    PubMed Central

    Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.

    2011-01-01

    Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
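
    The selection effect can be reproduced with a small simulation: several rules are designed on one small dataset, the rule with the minimum estimated error is chosen, and its estimated error is compared with the true error measured on a large independent sample. The data model, rules, and error estimator below are simple stand-ins (assumptions), not those of the article.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample(n):
        """Two equally likely Gaussian classes in 5 dimensions (a stand-in model)."""
        y = rng.integers(0, 2, n)
        x = rng.standard_normal((n, 5)) + 0.6 * y[:, None]
        return x, y

    def nearest_mean(xtr, ytr, xte):
        mu = np.array([xtr[ytr == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(xte[:, None, :] - mu[None, :, :], axis=2)
        return d.argmin(axis=1)

    def knn(xtr, ytr, xte, k):
        d = np.linalg.norm(xte[:, None, :] - xtr[None, :, :], axis=2)
        idx = np.argsort(d, axis=1)[:, :k]
        return (ytr[idx].mean(axis=1) > 0.5).astype(int)

    rules = {"nearest mean": nearest_mean,
             "1-NN": lambda a, b, c: knn(a, b, c, 1),
             "3-NN": lambda a, b, c: knn(a, b, c, 3)}

    n_trials, bias = 200, []
    for _ in range(n_trials):
        xtr, ytr = sample(30)          # small training set
        xho, yho = sample(20)          # small holdout used for error estimation
        xte, yte = sample(2000)        # large sample approximating the true error
        est = {name: (r(xtr, ytr, xho) != yho).mean() for name, r in rules.items()}
        best = min(est, key=est.get)   # rule with minimum ESTIMATED error
        true = (rules[best](xtr, ytr, xte) != yte).mean()
        bias.append(true - est[best])
    print(f"mean(true error - estimated error of chosen rule) = {np.mean(bias):+.3f}")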

  16. Equations for Scoring Rules When Data Are Missing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A document presents equations for scoring rules in a diagnostic and/or prognostic artificial-intelligence software system of the rule-based inference-engine type. The equations define a set of metrics that characterize the evaluation of a rule when data required for the antecedent clause(s) of the rule are missing. The metrics include a primary measure denoted the rule completeness metric (RCM) plus a number of subsidiary measures that contribute to the RCM. The RCM is derived from an analysis of a rule with respect to its truth and a measure of the completeness of its input data. The derivation is such that the truth value of an antecedent is independent of the measure of its completeness. The RCM can be used to compare the degree of completeness of two or more rules with respect to a given set of data. Hence, the RCM can be used as a guide to choosing among rules during the rule-selection phase of operation of the artificial-intelligence system.
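
    The record does not reproduce the equations, so the sketch below is only a hypothetical illustration of the general idea: evaluate the antecedent clauses that have data, report a truth value over those clauses, and report separately the fraction of clauses that could be evaluated. All names and the combination used here are assumptions, not the document's actual RCM definition.

    def evaluate_rule(antecedents, data):
        """Hypothetical illustration of a rule-completeness style metric.
        antecedents: list of (clause, needed_keys) pairs; a clause whose inputs
        are missing is treated as 'unknown' so that the truth value stays
        independent of completeness."""
        known, satisfied = 0, 0
        for clause, needed in antecedents:
            if all(key in data for key in needed):
                known += 1
                satisfied += bool(clause(data))
        completeness = known / len(antecedents)          # fraction of evaluable clauses
        truth = (satisfied == known) if known else None  # truth over known clauses only
        return truth, completeness

    # Toy diagnostic rule: "IF temp high AND pressure low THEN flag pump fault".
    rule = [(lambda d: d["temp"] > 80.0, ("temp",)),
            (lambda d: d["pressure"] < 1.2, ("pressure",))]

    print(evaluate_rule(rule, {"temp": 91.0}))                    # (True, 0.5)
    print(evaluate_rule(rule, {"temp": 91.0, "pressure": 1.0}))   # (True, 1.0)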

  17. Healthcare provider perceptions of clinical prediction rules

    PubMed Central

    Richardson, Safiya; Khan, Sundas; McCullagh, Lauren; Kline, Myriam; Mann, Devin; McGinn, Thomas

    2015-01-01

    Objectives To examine internal medicine and emergency medicine healthcare provider perceptions of usefulness of specific clinical prediction rules. Setting The study took place in two academic medical centres. A web-based survey was distributed and completed by participants between 1 January and 31 May 2013. Participants Medical doctors, doctors of osteopathy or nurse practitioners employed in the internal medicine or emergency medicine departments at either institution. Primary and secondary outcome measures The primary outcome was to identify the clinical prediction rules perceived as most useful by healthcare providers specialising in internal medicine and emergency medicine. Secondary outcomes included comparing usefulness scores of specific clinical prediction rules based on provider specialty, and evaluating associations between usefulness scores and perceived characteristics of these clinical prediction rules. Results Of the 401 healthcare providers asked to participate, a total of 263 (66%), completed the survey. The CHADS2 score was chosen by most internal medicine providers (72%), and Pulmonary Embolism Rule-Out Criteria (PERC) score by most emergency medicine providers (45%), as one of the top three most useful from a list of 24 clinical prediction rules. Emergency medicine providers rated their top three significantly more positively, compared with internal medicine providers, as having a better fit into their workflow (p=0.004), helping more with decision-making (p=0.037), better fitting into their thought process when diagnosing patients (p=0.001) and overall, on a 10-point scale, more useful (p=0.009). For all providers, the perceived qualities of useful at point of care, helps with decision making, saves time diagnosing, fits into thought process, and should be the standard of clinical care correlated highly (≥0.65) with overall 10-point usefulness scores. Conclusions Healthcare providers describe clear preferences for certain clinical prediction

  18. A Flexible Mechanism of Rule Selection Enables Rapid Feature-Based Reinforcement Learning

    PubMed Central

    Balcarras, Matthew; Womelsdorf, Thilo

    2016-01-01

    Learning in a new environment is influenced by prior learning and experience. Correctly applying a rule that maps a context to stimuli, actions, and outcomes enables faster learning and better outcomes compared to relying on strategies for learning that are ignorant of task structure. However, it is often difficult to know when and how to apply learned rules in new contexts. In our study we explored how subjects employ different strategies for learning the relationship between stimulus features and positive outcomes in a probabilistic task context. We test the hypothesis that task naive subjects will show enhanced learning of feature specific reward associations by switching to the use of an abstract rule that associates stimuli by feature type and restricts selections to that dimension. To test this hypothesis we designed a decision making task where subjects receive probabilistic feedback following choices between pairs of stimuli. In the task, trials are grouped in two contexts by blocks, where in one type of block there is no unique relationship between a specific feature dimension (stimulus shape or color) and positive outcomes, and following an un-cued transition, alternating blocks have outcomes that are linked to either stimulus shape or color. Two-thirds of subjects (n = 22/32) exhibited behavior that was best fit by a hierarchical feature-rule model. Supporting the prediction of the model mechanism these subjects showed significantly enhanced performance in feature-reward blocks, and rapidly switched their choice strategy to using abstract feature rules when reward contingencies changed. Choice behavior of other subjects (n = 10/32) was fit by a range of alternative reinforcement learning models representing strategies that do not benefit from applying previously learned rules. In summary, these results show that untrained subjects are capable of flexibly shifting between behavioral rules by leveraging simple model-free reinforcement learning and context
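
    A minimal sketch of the contrast the study draws is given below: a feature-based (rule-like) learner keeps one value per feature and therefore generalizes across stimuli, while an object-based learner must learn each stimulus separately. The task, reward probabilities, and the simple prediction-error update are illustrative assumptions, not the experimental design or the fitted models.

    import numpy as np

    rng = np.random.default_rng(0)
    colors = ["red", "blue"]
    shapes = ["circle", "square", "star", "cross", "ring", "wave"]
    p_reward = {"red": 0.8, "blue": 0.2}   # hypothetical block where only color matters

    def run(feature_based, n_trials=300, alpha=0.2, epsilon=0.1):
        """Mean reward earned by a simple value learner with a prediction-error update."""
        v = {}                              # one value per color (feature) or per object
        earned = 0.0
        for _ in range(n_trials):
            options = [(rng.choice(colors), rng.choice(shapes)) for _ in range(2)]
            keys = [o[0] if feature_based else o for o in options]
            values = [v.get(k, 0.5) for k in keys]
            choice = int(np.argmax(values)) if rng.random() > epsilon else int(rng.integers(2))
            reward = float(rng.random() < p_reward[options[choice][0]])
            k = keys[choice]
            v[k] = v.get(k, 0.5) + alpha * (reward - v.get(k, 0.5))
            earned += reward
        return earned / n_trials

    print("feature-based learner:", run(True))    # typically higher: only two values to learn
    print("object-based learner: ", run(False))   # twelve stimulus values to learn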

  19. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    PubMed

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper, a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of simultaneously minimizing the integrated absolute error for both the set-point and the load-disturbance responses is addressed. The problem is stated as a multi-objective optimization over a first-order-plus-dead-time process model subject to a robustness (maximum sensitivity) constraint. A set of Pareto-optimal solutions is obtained for different normalized dead times, and the optimal balance between the competing objectives is then obtained by choosing the Nash solution among the Pareto-optimal ones. A curve-fitting procedure is then applied to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
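
    The tuning rules themselves are not reproduced here; the sketch below only illustrates the performance measure being balanced, simulating a first-order-plus-dead-time process under a plain integer-order PID and accumulating the integrated absolute error separately for a set-point step and a load-disturbance step. The process and controller parameters are illustrative assumptions.

    import numpy as np

    def iae_fopdt(kp, ki, kd, K=1.0, T=1.0, L=0.3, dt=0.001, t_end=20.0):
        """Simulate a FOPDT process  K*exp(-L*s)/(T*s+1)  under a PID controller
        and return the integrated absolute errors for a unit set-point step at
        t=0 and a unit load-disturbance step at t=10 (illustrative scenario)."""
        n = int(t_end / dt)
        delay = int(L / dt)
        u_hist = np.zeros(n + delay)          # buffer implementing the dead time
        y, y_prev, integ = 0.0, 0.0, 0.0
        iae_sp, iae_load = 0.0, 0.0
        for i in range(n):
            t = i * dt
            e = 1.0 - y                        # unit set-point
            integ += e * dt
            deriv = -(y - y_prev) / dt         # derivative on measurement (no set-point kick)
            y_prev = y
            u_hist[i + delay] = kp * e + ki * integ + kd * deriv
            load = 1.0 if t >= 10.0 else 0.0   # additive input disturbance at t = 10
            y += dt * (-y + K * (u_hist[i] + load)) / T
            if t < 10.0:
                iae_sp += abs(e) * dt
            else:
                iae_load += abs(e) * dt
        return iae_sp, iae_load

    # Hypothetical PID gains (not obtained from the paper's tuning rules).
    print(iae_fopdt(kp=2.0, ki=2.0, kd=0.3))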

  20. Aqui y Alla (Here and There) Information-Based Learning Corridors between Tennessee and Puerto Rico: The Five Golden Rules in Intercultural Education

    ERIC Educational Resources Information Center

    Mehra, Bharat; Allard, Suzie; Qayyum, M. Asim; Barclay-McLaughlin, Gina

    2008-01-01

    This article proposes five information-based Golden Rules in intercultural education that represent a holistic approach to creating learning corridors across geographically dispersed academic communities. The Golden Rules are generated through qualitative analysis, grounded theory application, reflective practice, and critical research to…