Science.gov

Sample records for insulina humana regular

  1. Humanae Vitae and education for chastity.

    PubMed

    Lopez Trujillo, A

    1994-01-01

    In order to teach children to live chaste lives, their parents must also be living in accordance with the tenets expressed in the Pope's "Humanae Vitae." Periodic continence from sexual intercourse affords tranquility and peace to married life and allows parents to have a deeper influence on the education of their children. Living according to the encyclical promotes a reverence for life; it enhances communication in the family and allows children to witness a stable marriage. In addition to supporting the vocation to a stable marriage, the encyclical also promotes the ability of the family to nourish religious vocations in the children. The encyclical also reiterates the rights of parents to provide sex education to their children at home instead of surrendering this right to other institutions. Sex education (or, preferably, education for chastity) must be positive (and not reduce everything to biology), prudent and clear (holding moral values), and delicate (rather than explicit). By mid- to late-adolescence, young people should know that natural ways of spacing children are available, but they do not need to have the extensive counseling afforded to engaged couples. "Humanae Vitae" presents a bridge across the generations in family life which provides hope for the future.

  2. Illusory Liberalism in "Atlas de Geografía Humana"

    ERIC Educational Resources Information Center

    Ryan, Lorraine

    2014-01-01

    "Atlas de Geografía Humana" constitutes a critique of the much vaunted notion of a progressive Spain that has rectified the gender inequalities of the Francoist era, as one of the highly educated and successful protagonists, Fran, unwittingly adopts her mother's alignment with patriarchal norms. This novel elucidates the…

  3. Dimensional regularization is generic

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo

    2016-09-01

    The absence of the quadratic divergence in the Higgs sector of the Standard Model in dimensional regularization is usually regarded as an exceptional property of a specific regularization. To understand what is going on in dimensional regularization, we illustrate how to reproduce its results for the λϕ⁴ theory in more conventional regularizations, such as higher-derivative regularization; the basic postulate involved is that the quadratically divergent induced mass, which is independent of the scale change of the physical mass, is kinematical and unphysical. This is consistent with the derivation of the Callan-Symanzik equation, which is a comparison of two theories with slightly different masses, for the λϕ⁴ theory without encountering the quadratic divergence. In this sense, dimensional regularization may be said to be generic in a bottom-up approach starting from a successful low-energy theory. We also define a modified version of mass-independent renormalization for a scalar field which leads to the homogeneous renormalization group equation. Implications of the present analysis for the Standard Model at high energies and for the presence or absence of SUSY at LHC energies are briefly discussed.

  4. Regular gravitational lagrangians

    NASA Astrophysics Data System (ADS)

    Dragon, Norbert

    1992-02-01

    The Einstein action with vanishing cosmological constant is, for appropriate field content, the unique local action which is regular at the fixed point of affine coordinate transformations. Imposing this regularity requirement also excludes Wess-Zumino counterterms which trade gravitational anomalies for Lorentz anomalies. One has to expect dilatational and SL(D) anomalies. If these anomalies are absent and if the regularity of the quantum vertex functional can be controlled, then Einstein gravity is renormalizable.

  5. Regular phantom black holes.

    PubMed

    Bronnikov, K A; Fabris, J C

    2006-06-30

    We study self-gravitating, static, spherically symmetric phantom scalar fields with arbitrary potentials (favored by cosmological observations) and single out 16 classes of possible regular configurations with flat, de Sitter, and anti-de Sitter asymptotics. Among them are traversable wormholes, bouncing Kantowski-Sachs (KS) cosmologies, and asymptotically flat black holes (BHs). A regular BH has a Schwarzschild-like causal structure, but the singularity is replaced by a de Sitter infinity, giving a hypothetical BH explorer a chance to survive. It also looks possible that our Universe has originated in a phantom-dominated collapse in another universe, with KS expansion and isotropization after crossing the horizon. Explicit examples of regular solutions are built and discussed. Possible generalizations include k-essence type scalar fields (with a potential) and scalar-tensor gravity.

  6. Seeking a Regularity.

    ERIC Educational Resources Information Center

    Sokol, William

    This autoinstructional unit deals with the phenomena of regularity in chemical behavior. The prerequisites suggested are two other autoinstructional lessons (Experiments 1 and 2) identified in the Del Mod System as SE 018 020 and SE 018 023. The equipment needed is listed and 45 minutes is the suggested time allotment. The Student Guide includes…

  7. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
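
    A minimal numerical sketch of the diversification 'pressure' described above: an L2-regularized minimum-variance allocation under the budget constraint, with a closed-form KKT solution. The simulated data, the variance objective (standing in for expected shortfall), and the λ values are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 60, 10                                   # few samples, many assets: estimates are noisy
returns = rng.normal(0.001, 0.02, size=(T, N))  # simulated return history (assumption)
sigma = np.cov(returns, rowvar=False)

def min_variance_weights(sigma, lam):
    """Minimize w' Sigma w + lam * ||w||^2 subject to sum(w) = 1.
    The Lagrangian gives the closed form w ~ (Sigma + lam * I)^{-1} 1."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma + lam * np.eye(sigma.shape[0]), ones)
    return w / w.sum()

for lam in (0.0, 1e-4, 1e-2):
    w = min_variance_weights(sigma, lam)
    print(f"lambda={lam:8.4f}  max weight={np.abs(w).max():.3f}")
```

    Increasing λ pulls the solution toward the equal-weight portfolio, which is the 'diversification pressure' the abstract refers to.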

  8. The impact of 25 years of "Humanae Vitae".

    PubMed

    1993-08-01

    In 1968, Pope Paul VI reinforced the Catholic Church's position forbidding contraception in an encyclical known as "Humanae Vitae." The current Pope, John Paul II, has reiterated this position and stated that use of condoms is forbidden, even to prevent HIV transmission. Several public figures, including the IPPF President and the British Overseas Development Minister, have sought to initiate a dialogue with the Pope to express concern about population growth in developing countries. The Catholic Church is currently opposing efforts on the part of the Presidents of Peru and the Philippines to expand access to family planning (FP) programs. Many of the 500,000 women who die each year of pregnancy complications would have used contraceptives and lived, and an estimated 300 million developing world couples have no access to modern contraception, yet desire no more children. Many Catholics disagree with the Church's stance; 87% of US Catholics surveyed in 1992 stated that FP is the couple's choice. After 25 years of "Humanae Vitae," it is time for the Church to enter into dialogue and join efforts to improve life on earth in the next century. PMID:12345157

  9. Regularized versus non-regularized statistical reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Denisova, N. V.

    2011-08-01

    An important feature of positron emission tomography (PET) and single photon emission computed tomography (SPECT) is the stochastic nature of real clinical data. Statistical algorithms such as ordered-subsets expectation maximization (OSEM) and maximum a posteriori (MAP) are a direct consequence of this stochastic nature of the data. The principal difference between the two algorithms is that OSEM is a non-regularized approach, while MAP is a regularized algorithm. From the theoretical point of view, reconstruction problems belong to the class of ill-posed problems and should be treated using regularization. Regularization introduces an additional unknown regularization parameter into the reconstruction procedure as compared with non-regularized algorithms. However, a comparison of the non-regularized OSEM and regularized MAP algorithms with fixed regularization parameters has shown only minor differences between reconstructions. This problem is analyzed in the present paper. To improve reconstruction quality, a method of local regularization is proposed, based on a spatially adaptive regularization parameter. The MAP algorithm with local regularization was tested in reconstruction of the Hoffman brain phantom.
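
    A toy contrast between the two algorithm families: a plain MLEM iteration (OSEM with a single subset) next to a one-step-late MAP-EM step with a simplistic quadratic prior. The system matrix, the prior, and β are illustrative assumptions, not a clinical scanner model.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(40, 20))     # toy system matrix (assumption)
x_true = rng.uniform(0.5, 2.0, size=20)
y = rng.poisson(A @ x_true).astype(float)    # stochastic (Poisson) projection data

def em_update(x, beta=0.0):
    """MLEM step; beta > 0 gives a one-step-late MAP-EM step with a
    quadratic prior U(x) = ||x||^2 / 2, whose gradient is x."""
    ratio = y / np.maximum(A @ x, 1e-12)
    return x * (A.T @ ratio) / (A.T @ np.ones_like(y) + beta * x)

x_ml = x_map = np.ones(20)
for _ in range(200):
    x_ml = em_update(x_ml)                # non-regularized
    x_map = em_update(x_map, beta=0.5)    # regularized, fixed beta
print(np.linalg.norm(x_ml - x_true), np.linalg.norm(x_map - x_true))
```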

  10. 75 FR 70703 - Humana Insurance Company a Division of Carenetwork, Inc. Front End Operations and Account...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... Employment and Training Administration Humana Insurance Company a Division of Carenetwork, Inc. Front End... of Humana Insurance Company, a Division of CareNetwork, Inc., Front End Operations and Account... Reconsideration. The Department's Notice was published in the Federal Register on September 21, 2010 (75 FR...

  11. Mainstreaming the Regular Classroom Student.

    ERIC Educational Resources Information Center

    Kahn, Michael

    The paper presents activities, suggested by regular classroom teachers, to help prepare the regular classroom student for mainstreaming. The author points out that regular classroom children need a vehicle in which curiosity, concern, interest, fear, attitudes and feelings can be fully explored, where prejudices can be dispelled, and where the…

  12. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  13. Regularization in radio tomographic imaging

    NASA Astrophysics Data System (ADS)

    Sundaram, Ramakrishnan; Martin, Richard; Anderson, Christopher

    2013-05-01

    This paper demonstrates methods to select and apply regularization to the linear least-squares model formulation of the radio tomographic imaging (RTI) problem. Typically, the RTI inverse problem of image reconstruction is ill-conditioned due to the extremely small singular values of the weight matrix, which relates the link signal strengths to the voxel locations of the obstruction. Regularization is included to offset the non-invertible nature of the weight matrix by adding a regularization term such as the matrix approximation of derivatives in each dimension based on the difference operator. This operation yields a smooth least-squares solution for the measured data by suppressing the high-energy or noise terms in the derivative of the image. Traditionally, a scalar weighting factor of the regularization matrix is identified by trial and error (ad hoc) to yield the best fit of the solution to the data without either excessive smoothing or ringing oscillations at the boundaries of the obstruction. This paper proposes new scalar and vector regularization methods that are automatically computed based on the weight matrix. Evidence of the effectiveness of these methods compared to the preset scalar regularization method is presented for stationary and moving obstructions in an RTI wireless sensor network. The variation of the mean square reconstruction error as a function of the scalar regularization is calculated for known obstructions in the network. The vector regularization procedure based on selective updates to the singular values of the weight matrix attains the lowest mean square error.
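
    A sketch of the scalar-regularization baseline the paper improves upon: Tikhonov normal equations with a first-difference operator D. The weight matrix W, λ, and the 1-D obstruction profile are illustrative assumptions.

```python
import numpy as np

def rti_reconstruct(W, y, lam):
    """Solve min_x ||W x - y||^2 + lam * ||D x||^2 via the normal equations
    (W'W + lam * D'D) x = W'y, with D a first-difference operator."""
    n = W.shape[1]
    D = np.diff(np.eye(n), axis=0)               # (n-1) x n difference matrix
    return np.linalg.solve(W.T @ W + lam * (D.T @ D), W.T @ y)

rng = np.random.default_rng(2)
W = rng.uniform(0.0, 1.0, size=(30, 100))        # wide, ill-conditioned weight matrix
x_true = np.zeros(100); x_true[40:60] = 1.0      # a single "obstruction" (1-D for brevity)
y = W @ x_true + 0.01 * rng.standard_normal(30)
x_hat = rti_reconstruct(W, y, lam=1.0)           # lam chosen ad hoc, as the paper notes
```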

  14. Regularly timed events amid chaos

    NASA Astrophysics Data System (ADS)

    Blakely, Jonathan N.; Cooper, Roy M.; Corron, Ned J.

    2015-11-01

    We show rigorously that the solutions of a class of chaotic oscillators are characterized by regularly timed events in which the derivative of the solution is instantaneously zero. The perfect regularity of these events is in stark contrast with the well-known unpredictability of chaos. We explore some consequences of these regularly timed events through experiments using chaotic electronic circuits. First, we show that a feedback loop can be implemented to phase lock the regularly timed events to a periodic external signal. In this arrangement the external signal regulates the timing of the chaotic signal but does not strictly lock its phase. That is, phase slips of the chaotic oscillation persist without disturbing the timing of the regular events. Second, we couple the regularly timed events of one chaotic oscillator to those of another. A state of synchronization is observed where the oscillators exhibit synchronized regular events while their chaotic amplitudes and phases evolve independently. Finally, we add further coupling to synchronize the amplitudes as well, though in the opposite direction, illustrating the independence of the amplitudes from the regularly timed events.

  15. Nonconvex Regularization in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Tuia, Devis; Flamary, Remi; Barlaud, Michel

    2016-11-01

    In this paper, we study the effect of different regularizers and their implications in high-dimensional image classification and sparse linear unmixing. Although kernelization or sparse methods are globally accepted solutions for processing data in high dimensions, we present here a study on the impact of the form of regularization used and of its parametrization. We consider regularization via the traditional squared (ℓ2) and sparsity-promoting (ℓ1) norms, as well as more unconventional nonconvex regularizers (ℓp and the Log-Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community.
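
    The four penalty families the paper compares, written out as plain functions; the p and ε values below are illustrative choices, not the paper's.

```python
import numpy as np

def l2_squared(w):                return np.sum(w ** 2)          # convex, smooth
def l1(w):                        return np.sum(np.abs(w))       # convex, sparsity-promoting
def lp(w, p=0.5):                 return np.sum(np.abs(w) ** p)  # nonconvex for 0 < p < 1
def log_sum_penalty(w, eps=1e-2): return np.sum(np.log(1 + np.abs(w) / eps))  # nonconvex

w = np.array([1.0, 0.1, 0.0, -0.5])
for name, fn in [("l2^2", l2_squared), ("l1", l1),
                 ("l_0.5", lp), ("log-sum", log_sum_penalty)]:
    print(f"{name:8s} {fn(w):7.3f}")
```

    The nonconvex penalties charge small nonzero coefficients proportionally more than ℓ1 does, which is what drives the sparser solutions studied in the paper.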

  16. Rotating regular black hole solution

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

    Based on the Newman-Janis algorithm, the Ayón-Beato-García spacetime metric [Phys. Rev. Lett. 80, 5056 (1998)] of the regular spherically symmetric, static, and charged black hole has been converted into rotational form. It is shown that the derived rotating black-hole solution is regular, and that the critical value of the electric charge for which the two horizons merge into one decreases appreciably in the presence of a nonvanishing rotation parameter a of the black hole.

  17. NONCONVEX REGULARIZATION FOR SHAPE PRESERVATION

    SciTech Connect

    CHARTRAND, RICK

    2007-01-16

    The authors show that using a nonconvex penalty term to regularize image reconstruction can substantially improve the preservation of object shapes. The commonly used total-variation regularization, ∫|∇u|, penalizes the length of the object edges. They show that ∫|∇u|^p, 0 < p < 1, only penalizes edges of dimension at least 2-p, and thus does not penalize finite-length edges at all. They give numerical examples showing the resulting improvement in shape preservation.

  18. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required. PMID:23730197
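
    A simplified eigenvalue-clipping sketch of a condition-number-constrained estimator. The paper derives the truncation level from the likelihood; here the floor τ is set naively from the largest eigenvalue, so this is an assumption-laden illustration, not the authors' estimator.

```python
import numpy as np

def condreg(sample_cov, kappa_max):
    """Clip eigenvalues from below so that cond(estimate) <= kappa_max.
    (Naive floor tau = lambda_max / kappa_max; the paper instead chooses
    the truncation level by maximum likelihood.)"""
    vals, vecs = np.linalg.eigh(sample_cov)
    tau = vals.max() / kappa_max
    clipped = np.clip(vals, tau, None)
    return (vecs * clipped) @ vecs.T

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 50))        # "large p (50), small n (20)" sample
S = np.cov(X, rowvar=False)              # rank-deficient, ill-conditioned
print(np.linalg.cond(S), np.linalg.cond(condreg(S, kappa_max=100.0)))
```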

  19. Geometric continuum regularization of quantum field theory

    SciTech Connect

    Halpern, M.B.

    1989-11-08

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs.

  20. Boundary Regularity in Variational Problems

    NASA Astrophysics Data System (ADS)

    Kristensen, Jan; Mingione, Giuseppe

    2010-11-01

    We prove that, if $u : \Omega \subset \mathbb{R}^n \to \mathbb{R}^N$ is a solution to the Dirichlet variational problem $\min_w \int_{\Omega} F(x, w, Dw)\,dx$ subject to $w \equiv u_0$ on $\partial\Omega$, involving a regular boundary datum $(u_0, \partial\Omega)$ and a regular integrand $F(x, w, Dw)$ strongly convex in $Dw$ and satisfying suitable growth conditions, then $\mathcal{H}^{n-1}$-almost every boundary point is regular for $u$ in the sense that $Du$ is Hölder continuous in a relative neighborhood of the point. The existence of even one such regular boundary point was previously not known except for some very special cases treated by Jost & Meier (Math. Ann. 262:549-561, 1983). Our results are consequences of new up-to-the-boundary higher differentiability results that we establish for minima of the functionals in question. The methods also allow us to improve the known boundary regularity results for solutions to non-linear elliptic systems and, in some cases, to improve the known interior singular set estimates for minimizers. Moreover, our approach allows for a treatment of systems and functionals with “rough” coefficients belonging to suitable Sobolev spaces of fractional order.

  1. Dimensional regularization in configuration space

    SciTech Connect

    Bollini, C.G.; Giambiagi, J.J.

    1996-05-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in ν dimensions the perturbative momentum-space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov and Shirkov, are needed for convolutions. The regularized causal functions in x space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant analytic functions of ν. Several examples are discussed. © 1996 The American Physical Society.

  2. Regularization Analysis of SAR Superresolution

    SciTech Connect

    DELAURENTIS, JOHN M.; DICKEY, FRED M.

    2002-04-01

    Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. In a previous report the application of the concept to synthetic aperture radar was investigated as an operator inversion problem. Generally, the operator inversion problem is ill posed. This work treats the problem from the standpoint of regularization. Both the operator inversion approach and the regularization approach show that the ability to superresolve SAR imagery is severely limited by system noise.

  3. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multicollinearity, i.e., high correlations among exogenous variables. GSCA as yet has no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  4. Regular languages, regular grammars and automata in splicing systems

    NASA Astrophysics Data System (ADS)

    Mohamad Jan, Nurhidaya; Fong, Wan Heng; Sarmin, Nor Haniza

    2013-04-01

    A splicing system is a mathematical model that establishes the connection between the study of DNA molecules and formal language theory. In splicing systems, languages called splicing languages refer to the sets of double-stranded DNA molecules that may arise from an initial set of DNA molecules in the presence of restriction enzymes and ligase. In this paper, some splicing languages resulting from their respective splicing systems are shown. Since all splicing languages are regular, languages which result from splicing systems can be further investigated using grammars and automata from the field of formal language theory. A splicing language can be written in the form of a regular language generated by a grammar, and splicing languages can likewise be accepted by automata. In this research, two restriction enzymes are used in the splicing systems, namely BfuCI and NcoI.
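
    Since splicing languages are regular, they can be recognized by finite automata; below is a toy DFA for the regular language (ab)*, purely illustrative, as the actual BfuCI/NcoI splicing language is not reproduced in the abstract.

```python
# Toy DFA for the regular language (ab)* -- illustrative only; it is not
# the splicing language generated by the BfuCI/NcoI system in the paper.
DELTA = {("q0", "a"): "q1", ("q1", "b"): "q0"}   # transition function

def accepts(word, start="q0", finals=("q0",)):
    state = start
    for symbol in word:
        state = DELTA.get((state, symbol))       # missing transition -> reject
        if state is None:
            return False
    return state in finals

print(accepts("abab"), accepts("aba"))           # True False
```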

  5. Distributional Stress Regularity: A Corpus Study

    ERIC Educational Resources Information Center

    Temperley, David

    2009-01-01

    The regularity of stress patterns in a language depends on "distributional stress regularity", which arises from the pattern of stressed and unstressed syllables, and "durational stress regularity", which arises from the timing of syllables. Here we focus on distributional regularity, which depends on three factors. "Lexical stress patterning"…

  6. Regular Motions of Resonant Asteroids

    NASA Astrophysics Data System (ADS)

    Ferraz-Mello, S.

    1990-11-01

    This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS

  7. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance under rotation and reparameterization.

  8. Statistical regularities reduce perceived numerosity.

    PubMed

    Zhao, Jiaying; Yu, Ru Qi

    2016-01-01

    Numerical information can be perceived at multiple levels (e.g., one bird, or a flock of birds). The level of input has typically been defined by explicit grouping cues, such as contours or connecting lines. Here we examine how regularities of object co-occurrences shape numerosity perception in the absence of explicit grouping cues. Participants estimated the number of colored circles in an array. We found that estimates were lower in arrays containing colors that consistently appeared next to each other across the experiment, even though participants were not explicitly aware of the color pairs (Experiments 1a and 1b). To provide support for grouping, we introduced color duplicates and found that estimates were lower in arrays with two identical colors (Experiment 2). The underestimation could not be explained by increased attention to individual objects (Experiment 3). These results suggest that statistical regularities reduce perceived numerosity consistent with a grouping mechanism. PMID:26451701

  9. Humana looks to ISO registration to address quality improvement and customer satisfaction.

    PubMed

    2003-01-01

    Seeking new ways to improve standardization of clinical operations and customer focus, Louisville, KY-based Humana, Inc. announced in November that it has become the first healthcare company to be registered in the U.S. under ISO 9001:2000, a quality management standard published by the International Organization for Standardization (ISO).

  10. Deep learning regularized Fisher mappings.

    PubMed

    Wong, W K; Sun, Mingming

    2011-10-01

    For classification tasks, it is always desirable to extract features that are most effective for preserving class separability. In this brief, we propose a new feature extraction method called regularized deep Fisher mapping (RDFM), which learns an explicit mapping from the sample space to the feature space using a deep neural network to enhance the separability of features according to the Fisher criterion. Compared to kernel methods, the deep neural network is a deep and nonlocal learning architecture, and therefore exhibits a more powerful ability to learn the nature of highly variable datasets from fewer samples. To eliminate the side effects of overfitting brought about by the large capacity of powerful learners, regularizers are applied in the learning procedure of RDFM. RDFM is evaluated on various types of datasets, and the results reveal that it is necessary to apply unsupervised regularization in the fine-tuning phase of deep learning. Thus, for very flexible models, the optimal Fisher feature extractor may be a balance between discriminative ability and descriptive ability.

  11. New Two-Body Regularization

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2007-01-01

    We present a new scheme to regularize a three-dimensional two-body problem under perturbations. It is a combination of Sundman's time transformation and Levi-Civita's spatial coordinate transformation applied to the two-dimensional components of the position and velocity vectors in the osculating orbital plane. We adopt a coordinate triad specifying the plane as a function of the orbital angular momentum vector only. Since the magnitude of the orbital angular momentum is explicitly computed from the in-the-plane components of the position and velocity vectors, only two components of the orbital angular momentum vector are to be determined. In addition to these, we select the total energy of the two-body system and the physical time as additional components of the new variables. The equations of motion of the new variables have no singularity even when the mutual distance is extremely small, and therefore, the new variables are suitable to deal with close encounters. As a result, the number of dependent variables in the new scheme becomes eight, which is significantly smaller than the existing schemes to avoid close encounters: two less than the Kustaanheimo-Stiefel and the Bürdet-Ferrandiz regularizations, and five less than the Sperling-Bürdet/Bürdet-Heggie regularization.
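
    The two classical ingredients the scheme combines, in their standard textbook form (the paper's full eight-variable set is not reproduced here):

```latex
% Sundman time transformation and the Levi-Civita map (standard forms):
\frac{dt}{ds} = r , \qquad
x_1 + i\,x_2 = (u_1 + i\,u_2)^2 \;\Longrightarrow\; r = u_1^2 + u_2^2 ,
% under which the in-plane Kepler motion becomes a (perturbed) harmonic
% oscillator in u, so the collision singularity r = 0 disappears.
```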

  12. Knowledge and regularity in planning

    NASA Technical Reports Server (NTRS)

    Allen, John A.; Langley, Pat; Matwin, Stan

    1992-01-01

    The field of planning has focused on several methods of using domain-specific knowledge. The three most common methods, use of search control, use of macro-operators, and analogy, are part of a continuum of techniques differing in the amount of reused plan information. This paper describes TALUS, a planner that exploits this continuum, and is used for comparing the relative utility of these methods. We present results showing how search control, macro-operators, and analogy are affected by domain regularity and the amount of stored knowledge.

  13. Tessellating the Sphere with Regular Polygons

    ERIC Educational Resources Information Center

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
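
    The counting behind "triangles, squares and pentagons": a regular spherical tessellation {p, q} needs positive angle excess, i.e. 1/p + 1/q > 1/2. A short enumeration (a standard fact, not taken from the article's text):

```python
# Schlafli symbols {p, q}: q regular p-gons meet at each vertex.
# Positive spherical angle excess requires 1/p + 1/q > 1/2.
for p in range(3, 7):
    for q in range(3, 7):
        if 1 / p + 1 / q > 1 / 2:
            print(f"{{{p},{q}}}", end=" ")   # {3,3} {3,4} {3,5} {4,3} {5,3}
print()
```

    Only p = 3, 4, 5 appear, i.e. spherical triangles, squares and pentagons, matching the abstract.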

  14. Regular Pentagons and the Fibonacci Sequence.

    ERIC Educational Resources Information Center

    French, Doug

    1989-01-01

    Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)
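
    The connection the article describes, in one computation: the pentagon's diagonal-to-side ratio is the golden ratio φ = 2 cos(π/5), and ratios of consecutive Fibonacci (or Lucas) numbers converge to it. Standard values, not drawn from the article's text.

```python
import math

phi = (1 + math.sqrt(5)) / 2        # golden ratio; also 2*cos(pi/5) = diagonal/side
print(phi, 2 * math.cos(math.pi / 5))

fib = [1, 1]                         # Fibonacci; start with [2, 1] for the Lucas numbers
for _ in range(20):
    fib.append(fib[-1] + fib[-2])
print(fib[-1] / fib[-2])             # -> 1.6180339887...
```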

  15. Some Cosine Relations and the Regular Heptagon

    ERIC Educational Resources Information Center

    Osler, Thomas J.; Heng, Phongthong

    2007-01-01

    The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)

  16. Natural frequency of regular basins

    NASA Astrophysics Data System (ADS)

    Tjandra, Sugih S.; Pudjaprasetya, S. R.

    2014-03-01

    Similar to the vibration of a guitar string or an elastic membrane, water waves in an enclosed basin undergo standing oscillatory waves, also known as seiches. The resonant (eigen) periods of seiches are determined by the water depth and the geometry of the basin. For regular basins, explicit formulas are available. Resonance occurs when the dominant frequency of the external force matches an eigenfrequency of the basin. In this paper, we implement a conservative finite volume scheme for the 2D shallow water equations to simulate resonance in closed basins. Further, we would like to use this scheme, together with energy spectra of the recorded signal, to extract the resonant periods of arbitrary basins. Here we first test the procedure by obtaining the resonant periods of a square closed basin. The numerical resonant periods that we obtain are comparable with those from analytical formulas.
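
    The explicit formula for rectangular basins against which such numerics can be checked: eigenperiods of a closed basin of uniform depth (Merian's formula for the m = 1, n = 0 mode). This is the standard shallow-water result; the basin dimensions below are assumptions.

```python
import math

def seiche_period(m, n, Lx, Ly, depth, g=9.81):
    """T_mn = 2 / (c * sqrt((m/Lx)^2 + (n/Ly)^2)), with c = sqrt(g*h),
    for mode (m, n) of a closed rectangular basin of uniform depth h."""
    c = math.sqrt(g * depth)
    return 2.0 / (c * math.sqrt((m / Lx) ** 2 + (n / Ly) ** 2))

# fundamental mode of a 1 km x 1 km basin, 10 m deep (illustrative numbers)
print(seiche_period(1, 0, 1000.0, 1000.0, 10.0))   # ~202 s
```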

  17. Regularized degenerate multi-solitons

    NASA Astrophysics Data System (ADS)

    Correa, Francisco; Fring, Andreas

    2016-09-01

    We report complex PT-symmetric multi-soliton solutions to the Korteweg-de Vries equation that asymptotically contain one-soliton solutions, with each of them possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate energy solutions of the Schrödinger equation. Technically this is achieved by the application of Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively they may be constructed from a limiting process within the context of Hirota's direct method or from a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.

  19. Bayesian regularization of neural networks.

    PubMed

    Burden, Frank; Winkler, Dave

    2008-01-01

    Bayesian regularized artificial neural networks (BRANNs) are more robust than standard back-propagation nets and can reduce or eliminate the need for lengthy cross-validation. Bayesian regularization is a mathematical process that converts a nonlinear regression into a "well-posed" statistical problem in the manner of a ridge regression. The advantage of BRANNs is that the models are robust and the validation process, which scales as O(N²) in normal regression methods, such as back propagation, is unnecessary. These networks provide solutions to a number of problems that arise in QSAR modeling, such as choice of model, robustness of model, choice of validation set, size of validation effort, and optimization of network architecture. They are difficult to overtrain, since evidence procedures provide an objective Bayesian criterion for stopping training. They are also difficult to overfit, because the BRANN calculates and trains on a number of effective network parameters or weights, effectively turning off those that are not relevant. This effective number is usually considerably smaller than the number of weights in a standard fully connected back-propagation neural net. Automatic relevance determination (ARD) of the input variables can be used with BRANNs, and this allows the network to "estimate" the importance of each input. The ARD method ensures that irrelevant or highly correlated indices used in the modeling are neglected as well as showing which are the most important variables for modeling the activity data. This chapter outlines the equations that define the BRANN method plus a flowchart for producing a BRANN-QSAR model. Some results of the use of BRANNs on a number of data sets are illustrated and compared with other linear and nonlinear models.

  20. Stochastic regularization operators on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan

    2016-04-01

    Most geophysical inverse problems require the solution of underdetermined systems of equations. In order to solve such inverse problems, appropriate regularization is required. Ideally, this regularization includes information on the expected model variability and spatial correlation. Based on geostatistical covariance functions, which can be adapted to the specific situation, stochastic regularization can be used to add auxiliary constraints to the given inverse problem. Stochastic regularization operators have been successfully applied to geophysical inverse problems formulated on regular grids. Here, we demonstrate the calculation of stochastic regularization operators for unstructured meshes. Unstructured meshes are advantageous with regard to incorporating arbitrary topography, undulating geological interfaces and complex acquisition geometries into the inversion. However, compared to regular grids, unstructured meshes have variable cell sizes, complicating the calculation of stochastic operators. The stochastic operators proposed here are based on a 2D exponential correlation function, allowing spatial correlation lengths to be predefined. The regularization thus acts over an imposed correlation length rather than only taking into account neighbouring cells, as in regular smoothing constraints. Correlation over a spatial length partly removes the effects of the variable cell sizes of unstructured meshes on the regularization. Synthetic models having large-scale interfaces as well as small-scale stochastic variations are used to analyse the performance and behaviour of the stochastic regularization operators. The resulting inverted models obtained with stochastic regularization are compared against the results of standard regularization approaches (damping and smoothing). Besides using stochastic operators for regularization, we plan to incorporate the footprint of the stochastic operator in further applications such as the calculation of the cross-gradient functions.
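
    A dense-matrix sketch of the construction: an exponential covariance over (possibly irregular) cell centers, whose inverse Cholesky factor serves as the regularization operator. The mesh, the correlation length, and the dense factorization are illustrative assumptions (practical meshes need sparse approximations).

```python
import numpy as np

def stochastic_operator(centers, corr_len):
    """C_ij = exp(-d_ij / corr_len) on the cell centers; any L with
    L'L = C^{-1} acts as the stochastic regularization operator,
    e.g. L = inv(chol(C)) (dense -- small meshes only)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    C = np.exp(-d / corr_len)
    return np.linalg.inv(np.linalg.cholesky(C))

cells = np.random.default_rng(4).uniform(0, 100, size=(50, 2))  # irregular cell centers
L = stochastic_operator(cells, corr_len=20.0)  # penalize lam * ||L m||^2 in the inversion
```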

  1. Learning regularized LDA by clustering.

    PubMed

    Pang, Yanwei; Wang, Shuang; Yuan, Yuan

    2014-12-01

    As a supervised dimensionality reduction technique, linear discriminant analysis has a serious overfitting problem when the number of training samples per class is small. The main reason is that the between- and within-class scatter matrices computed from the limited number of training samples deviate greatly from the underlying ones. To overcome the problem without increasing the number of training samples, we propose making use of the structure of the given training data to regularize the between- and within-class scatter matrices with between- and within-cluster scatter matrices, respectively and simultaneously. The within- and between-cluster matrices are computed from unsupervised clustered data. The within-cluster scatter matrix contributes to encoding possible intra-class variations, and the between-cluster scatter matrix is useful for separating classes. Their contributions are inversely proportional to the number of training samples per class. The advantages of the proposed method become more remarkable as the number of training samples per class decreases. Experimental results on the AR and FERET face databases demonstrate the effectiveness of the proposed method.
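
    The core blending idea in a few lines: within-class scatter is mixed with within-cluster scatter from unsupervised k-means. The mixing weight α, the cluster count, and the toy data are illustrative assumptions, not the paper's exact weighting rule (which ties the weights to the per-class sample count).

```python
import numpy as np
from sklearn.cluster import KMeans

def scatter(X, labels):
    """Sum of the within-group scatter matrices over the given grouping."""
    S = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(labels):
        Xg = X[labels == g] - X[labels == g].mean(axis=0)
        S += Xg.T @ Xg
    return S

def regularized_within_scatter(X, y, n_clusters=5, alpha=0.5):
    """Blend the (unreliable) within-class scatter with a within-cluster
    scatter from unsupervised k-means; alpha is an illustrative weight."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return (1 - alpha) * scatter(X, y) + alpha * scatter(X, clusters)

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 8))
y = np.repeat(np.arange(20), 3)          # 20 classes, only 3 samples each
Sw_reg = regularized_within_scatter(X, y)
```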

  2. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.

  3. Oral insulin--a perspective.

    PubMed

    Raj, N K Kavitha; Sharma, Chandra P

    2003-01-01

    Diabetes mellitus is generally controlled quite well with the administration of oral medications or by the use of insulin injections. The current practice is the use of one or more doses of intermediate- or long-acting insulin per day. Oral insulin is a promising yet experimental method of providing tight glycemic control for patients with diabetes. Biologically adhesive delivery systems offer important advantages over conventional drug delivery systems. Engineered microspheres made of erodible polymers display strong adhesive interactions with gastrointestinal mucus and the cellular lining, and can traverse both the mucosal epithelium and the follicle-associated epithelium covering the lymphoid tissue of Peyer's patches. Alginate, a natural polymer recovered from seaweed, is being developed as a nanoparticle for the delivery of insulin without its being destroyed in the stomach. Alginate already finds application in the biotechnology industry as a thickening agent, a gelling agent and a colloid stabilizer. Alginate has, in addition, several other properties that have enabled it to be used as a matrix for the entrapment and delivery of a variety of proteins, such as insulin, and of cells. These properties include: a relatively inert aqueous environment within the matrix; a mild, room-temperature encapsulation process free of organic solvents; a high gel porosity which allows for high diffusion rates of macromolecules; the ability to control this porosity with simple coating procedures; and dissolution and biodegradation of the system under normal physiological conditions.

  4. Regular attractors and nonautonomous perturbations of them

    SciTech Connect

    Vishik, Marko I; Zelik, Sergey V; Chepyzhov, Vladimir V

    2013-01-31

    We study regular global attractors of dissipative dynamical semigroups with discrete or continuous time and we investigate attractors for nonautonomous perturbations of such semigroups. The main theorem states that the regularity of global attractors is preserved under small nonautonomous perturbations. Moreover, nonautonomous regular global attractors remain exponential and robust. We apply these general results to model nonautonomous reaction-diffusion systems in a bounded domain of R{sup 3} with time-dependent external forces. Bibliography: 22 titles.

  5. State-Space Regularization: Geometric Theory

    SciTech Connect

    Chavent, G.; Kunisch, K.

    1998-05-15

    Regularization of nonlinear ill-posed inverse problems is analyzed for a class of problems that is characterized by mappings which are the composition of a well-posed nonlinear and an ill-posed linear mapping. Regularization is carried out in the range of the nonlinear mapping. In applications this corresponds to the state-space variable of a partial differential equation or to preconditioning of data. The geometric theory of projection onto quasi-convex sets is used to analyze the stabilizing properties of this regularization technique and to describe its asymptotic behavior as the regularization parameter tends to zero.

  6. Regular Decompositions for H(div) Spaces

    SciTech Connect

    Kolev, Tzanio; Vassilevski, Panayot

    2012-01-01

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  7. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... advances without approval of the NCUA Board for a period of six months after becoming a member. This subsection shall not apply to any credit union which becomes a Regular member of the Facility within six... member of the Facility at any time within six months prior to becoming a Regular member of the Facility....

  8. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-04-01

    Possible nonperturbative continuum regularization schemes for quantum field theory are discussed which are based upon the Langevin equation of Parisi and Wu. Breit, Gupta and Zaks made the first proposal for a new gauge-invariant nonperturbative regularization. The scheme is based on smearing in the ''fifth-time'' of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, two difficulties arise which, in general, ruin the scheme. One problem is that the superficial quadratic divergences force a bottomless action for the noise. Another difficulty is that stochastic regularization by fifth-time smearing is incompatible with Zwanziger's gauge fixing, which is the only known nonperturbative covariant gauge fixing for nonabelian gauge theories. Finally, a successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest. Hopefully, the scheme will lend itself to nonperturbative analysis. 44 refs., 16 figs.

  9. Transport Code for Regular Triangular Geometry

    1993-06-09

    DIAMANT2 solves the two-dimensional static multigroup neutron transport equation in planar regular triangular geometry. Both regular and adjoint, inhomogeneous and homogeneous problems subject to vacuum, reflective or input specified boundary flux conditions are solved. Anisotropy is allowed for the scattering source. Volume and surface sources are allowed for inhomogeneous problems.

  10. On regularizations of the Dirac delta distribution

    NASA Astrophysics Data System (ADS)

    Hosseini, Bamdad; Nigam, Nilima; Stockie, John M.

    2016-01-01

    In this article we consider regularizations of the Dirac delta distribution with applications to prototypical elliptic and hyperbolic partial differential equations (PDEs). We study the convergence of a sequence of distributions SH to a singular term S as a parameter H (associated with the support size of SH) shrinks to zero. We characterize this convergence in both the weak-* topology of distributions and a weighted Sobolev norm. These notions motivate a framework for constructing regularizations of the delta distribution that includes a large class of existing methods in the literature. This framework allows different regularizations to be compared. The convergence of solutions of PDEs with these regularized source terms is then studied in various topologies such as pointwise convergence on a deleted neighborhood and weighted Sobolev norms. We also examine the lack of symmetry in tensor product regularizations and effects of dissipative error in hyperbolic problems.
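
    One concrete member of the class of regularizations studied: a cosine-smoothed delta of support width H. The check below verifies that the zeroth moment is 1 for every H (a standard kernel of this kind; the grid is an assumption).

```python
import numpy as np

def delta_H(x, H):
    """Cosine regularization of the Dirac delta, supported on [-H, H]:
    (1 + cos(pi*x/H)) / (2*H) inside the support, 0 outside."""
    return np.where(np.abs(x) < H, (1 + np.cos(np.pi * x / H)) / (2 * H), 0.0)

x = np.linspace(-1, 1, 4001)
for H in (0.5, 0.1, 0.02):
    w = delta_H(x, H)
    mass = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(x))   # trapezoid rule
    print(f"H={H:4.2f}  zeroth moment = {mass:.6f}")     # = 1 for every H
```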

  11. Quantitative regularities in floodplain formation

    NASA Astrophysics Data System (ADS)

    Nevidimova, O.

    2009-04-01

    Modern methods of the theory of complex systems allow one to build mathematical models of complex systems where self-organizing processes are largely determined by nonlinear effects and feedback. However, there exist some factors that exert significant influence on the dynamics of geomorphosystems but can hardly be adequately expressed in the language of mathematical models. Conceptual modeling allows us to overcome this difficulty. It is based on the methods of synergetics, which, together with the theory of dynamical systems and classical geomorphology, enable us to display the dynamics of geomorphological systems. The most adequate concept for mathematical modeling of complex systems is that of model dynamics based on equilibrium. This concept rests on dynamic equilibrium, the tendency towards which is observed in the evolution of all geomorphosystems. As an objective law, it is revealed in the evolution of fluvial relief in general, and in river channel processes in particular, demonstrating the ability of these systems to self-organize. The channel process is expressed in the formation of river reaches, riffles, meanders and floodplain. As the floodplain is a surface periodically flooded during high waters, it naturally connects the river channel with the slopes, being one of the boundary expressions of the water stream's activity. Floodplain dynamics is inseparable from channel dynamics. The floodplain is formed by simultaneous horizontal and vertical displacement of the river channel, that is, Y = Y(x, y), where x, y are the horizontal and vertical coordinates and Y is the floodplain height. When dy/dt = 0 (for a channel that is not incising), the river, being displaced in a horizontal plane, leaves behind a low surface whose flooding during high waters (total duration of flooding) changes from the maximum at the initial moment t0 to zero at the moment tn. The total amount of material accumulated on the floodplain surface changes in a similar manner

  13. Coupling regularizes individual units in noisy populations.

    PubMed

    Ly, Cheng; Ermentrout, G Bard

    2010-01-01

    The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators. PMID:20365403
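
    An Euler-Maruyama sketch of the O-U result: with diffusive coupling, the quieter unit's stationary variance can fall below its uncoupled value even though its partner is noisier. All rates and noise levels are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, steps = 1e-3, 200_000
g, d = 1.0, 5.0            # relaxation rate, diffusive coupling strength
s1, s2 = 1.0, 1.5          # unit 1 is the quieter one

def var_x1(coupled):
    """Variance of x1 from Euler-Maruyama integration of two diffusively
    coupled O-U processes: dx1 = (-g*x1 + d*(x2 - x1))dt + s1*dW1, etc."""
    x1 = x2 = 0.0
    xs = np.empty(steps)
    for i in range(steps):
        c = d * (x2 - x1) if coupled else 0.0
        x1 += (-g * x1 + c) * dt + s1 * np.sqrt(dt) * rng.standard_normal()
        x2 += (-g * x2 - c) * dt + s2 * np.sqrt(dt) * rng.standard_normal()
        xs[i] = x1
    return xs.var()

print("uncoupled var(x1):", var_x1(False))   # ~ s1^2 / (2g) = 0.50
print("coupled   var(x1):", var_x1(True))    # smaller, despite the noisier partner
```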

  16. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung Fung

    1988-01-01

    Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.
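
    As a rough illustration of why partition shape matters in such a model: per-processor work scales with partition area, while nearest-neighbor (stencil) communication scales with partition perimeter, so the computation-to-communication ratio favors compact partitions. The back-of-the-envelope sketch below is our illustration, not the paper's method; mesh size and processor count are arbitrary.

        def ratio(rows, cols):
            """Computation-to-communication ratio for one rectangular partition."""
            area = rows * cols            # points computed per iteration
            perim = 2 * (rows + cols)     # boundary points exchanged with neighbors
            return area / perim

        N, P = 1024, 16
        print("square blocks:", ratio(N // 4, N // 4))   # 4x4 processor grid
        print("strips:       ", ratio(N // P, N))        # 16 horizontal strips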

  17. Continuum regularization of gauge theory with fermions

    SciTech Connect

    Chan, H.S.

    1987-03-01

    The continuum regularization program is discussed in the case of d-dimensional gauge theory coupled to fermions in an arbitrary representation. Two physically equivalent formulations are given. First, a Grassmann formulation is presented, which is based on the two-noise Langevin equations of Sakita, Ishikawa and Alfaro and Gavela. Second, a non-Grassmann formulation is obtained by regularized integration of the matter fields within the regularized Grassmann system. Explicit perturbation expansions are studied in both formulations, and considerable simplification is found in the integrated non-Grassmann formalism.

  18. Parallel Communicating Grammar Systems with Regular Control

    NASA Astrophysics Data System (ADS)

    Pardubská, Dana; Plátek, Martin; Otto, Friedrich

    Parallel communicating grammar systems with regular control (RPCGS, for short) are introduced, which are obtained from returning regular parallel communicating grammar systems by restricting the derivations that are executed in parallel by the various components through a regular control language. For the class of languages that are generated by RPCGSs with constant communication complexity we derive a characterization in terms of a restricted type of freely rewriting restarting automaton. From this characterization we obtain that these languages are semi-linear, and that centralized RPCGSs with constant communication complexity are of the same generative power as non-centralized RPCGSs with constant communication complexity.

  19. Regularization schemes and the multiplicative anomaly

    NASA Astrophysics Data System (ADS)

    Evans, T. S.

    1999-06-01

    Elizalde, Vanzo, and Zerbini have shown that the effective action of two free Euclidean scalar fields in flat space contains a ‘multiplicative anomaly’ when ζ-function regularization is used. This is related to the Wodzicki residue. I show that there is no anomaly when using a wide range of other regularization schemes and that the anomaly can be removed by an unusual choice of renormalization scales. I define new types of anomalies and show that they have similar properties. Thus multiplicative anomalies encode no novel physics. They merely illustrate some dangerous aspects of ζ-function and Schwinger proper time regularization schemes.
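
    For orientation, the quantity at issue can be written in standard ζ-function notation; the formulas below restate the usual textbook definitions, not any result specific to this paper:

        \zeta_A(s) = \sum_n \lambda_n^{-s}, \qquad
        \det{}_\zeta A = \exp\bigl(-\zeta_A'(0)\bigr),

        a(A,B) = \ln \det{}_\zeta (AB) - \ln \det{}_\zeta A - \ln \det{}_\zeta B .

    The multiplicative anomaly a(A,B) measures the failure of the ζ-regularized determinant to factorize.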

  20. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
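
    One of the quickest such derivations treats the octahedron as two square pyramids glued at their bases, giving V = (√2/3)a³; this is standard geometry, not necessarily one of the article's five methods. A quick check:

        import math

        def octahedron_volume(a):
            # Two square pyramids of base a*a and height a/sqrt(2):
            # V = 2 * (1/3) * a**2 * (a / sqrt(2)) = sqrt(2)/3 * a**3
            return math.sqrt(2) / 3 * a**3

        print(octahedron_volume(1.0))  # ~0.4714 for the unit-edge octahedron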

  1. Parallelization of irregularly coupled regular meshes

    NASA Technical Reports Server (NTRS)

    Chase, Craig; Crowley, Kay; Saltz, Joel; Reeves, Anthony

    1992-01-01

    Regular meshes are frequently used for modeling physical phenomena on both serial and parallel computers. One advantage of regular meshes is that efficient discretization schemes can be implemented in a straightforward manner. However, geometrically complex objects, such as aircraft, cannot be easily described using a single regular mesh. Multiple interacting regular meshes are frequently used to describe complex geometries. Each mesh models a subregion of the physical domain. The meshes, or subdomains, can be processed in parallel, with periodic updates carried out to move information between the coupled meshes. In many cases, there are a relatively small number (one to a few dozen) subdomains, so that each subdomain may also be partitioned among several processors. We outline a composite run-time/compile-time approach for supporting these problems efficiently on distributed-memory machines. These methods are described in the context of a multiblock fluid dynamics problem developed at LaRC.

  2. Regular Exercise: Antidote for Deadly Diseases?

    MedlinePlus

    Aug. 9, 2016 (HealthDay News) -- Getting lots of exercise may reduce your risk for five common diseases, ...

  3. Mixed-Norm Regularization for Brain Decoding

    PubMed Central

    Flamary, R.; Jrad, N.; Phlypo, R.; Congedo, M.; Rakotomamonjy, A.

    2014-01-01

    This work investigates the use of mixed-norm regularization for sensor selection in event-related potential (ERP) based brain-computer interfaces (BCI). The classification problem is cast as a discriminative optimization framework where sensor selection is induced through the use of mixed-norms. This framework is extended to the multitask learning situation where several similar classification tasks related to different subjects are learned simultaneously. In this case, multitask learning helps mitigate the data scarcity issue, yielding more robust classifiers. For this purpose, we have introduced a regularizer that induces both sensor selection and classifier similarities. The different regularization approaches are compared on three ERP datasets, showing the interest of mixed-norm regularization in terms of sensor selection. The multitask approaches are evaluated when a small number of learning examples are available, yielding significant performance improvements, especially for subjects performing poorly. PMID:24860614
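
    The sensor-selection mechanism rests on the l2,1 mixed norm, which sums the l2 norms of per-sensor weight groups; its proximal operator zeroes out whole groups at once. A generic sketch of that operator (group layout and λ are illustrative; this is not the authors' code):

        import numpy as np

        def prox_l21(W, lam):
            """Proximal operator of lam * sum_g ||W[g]||_2 (rows = sensor groups).

            Rows whose l2 norm falls below lam are set exactly to zero,
            removing the corresponding sensor from the model entirely."""
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
            return scale * W

        W = np.random.default_rng(1).normal(size=(8, 5))   # 8 sensors x 5 weights
        W_sparse = prox_l21(W, lam=2.0)
        print("selected sensors:", np.flatnonzero(np.linalg.norm(W_sparse, axis=1)))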

  4. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-01-01

    Breit, Gupta, and Zaks made the first proposal for a new gauge-invariant nonperturbative regularization. The scheme is based on smearing in the fifth-time of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, difficulties arise which, in general, ruin the scheme. A successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest.

  5. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction of EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time consumption for solving the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on separable approximation algorithm is proposed by using adaptive step-size and preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798
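
    On a linearized forward model J (the sensitivity matrix), an L1-regularized reconstruction of a conductivity update can be sketched with plain iterative soft thresholding; separable-approximation methods of the kind the paper builds on add adaptive (e.g. Barzilai-Borwein) step sizes on top of this loop. All names and sizes below are illustrative, not the authors' implementation.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(J, v, lam, n_iter=200):
            """Minimize 0.5*||J x - v||^2 + lam*||x||_1 by soft thresholding."""
            L = np.linalg.norm(J, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(J.shape[1])
            for _ in range(n_iter):
                x = soft(x - J.T @ (J @ x - v) / L, lam / L)
            return x

        rng = np.random.default_rng(0)
        J = rng.normal(size=(40, 100))          # stand-in sensitivity matrix
        x_true = np.zeros(100); x_true[[10, 50]] = 1.0
        v = J @ x_true + 0.01 * rng.normal(size=40)
        print(np.flatnonzero(np.abs(ista(J, v, lam=0.5)) > 0.1))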

  7. Perturbations in a regular bouncing universe

    SciTech Connect

    Battefeld, T.J.; Geshnizjani, G.

    2006-03-15

    We consider a simple toy model of a regular bouncing universe. The bounce is caused by an extra timelike dimension, which leads to a sign flip of the ρ² term in the effective four-dimensional Randall-Sundrum-like description. We find a wide class of possible bounces: big bang avoiding ones for regular matter content, and big rip avoiding ones for phantom matter. Focusing on radiation as the matter content, we discuss the evolution of scalar, vector and tensor perturbations. We compute a spectral index of n_s = -1 for scalar perturbations and a deep blue index for tensor perturbations after invoking vacuum initial conditions, ruling out such a model as a realistic one. We also find that the spectrum (evaluated at Hubble crossing) is sensitive to the bounce. We conclude that it is challenging, but not impossible, for cyclic/ekpyrotic models to succeed, if one can find a regularized version.
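
    To make the sign flip concrete: in the usual Randall-Sundrum-type correction the Friedmann equation reads H² = (8πG/3)ρ(1 + ρ/2λ), and flipping the sign of the ρ² term produces a bounce where the bracket vanishes. This is the generic braneworld form, quoted here for orientation rather than from the paper itself:

        H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{2\lambda}\right),
        \qquad H = 0 \ \text{at}\ \rho = 2\lambda .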

  8. Strong regularizing effect of integrable systems

    SciTech Connect

    Zhou, Xin

    1997-11-01

    Many time evolution problems have the so-called strong regularization effect, that is, with any irregular initial data, as soon as t becomes greater than 0, the solution becomes C^∞ in both the spatial and temporal variables. This paper studies 1×1-dimensional integrable systems for such a regularizing effect. In the work by Sachs and Kappler [S][K] (see also the earlier works [KFJ] and [Ka]), the strong regularizing effect is proved for KdV with rapidly decaying irregular initial data, using the inverse scattering method. There are two equivalent Gel'fand-Levitan-Marchenko (GLM) equations associated to an inverse scattering problem, one normalized at x = +∞ and the other at x = -∞. The method of [S][K] relies on the fact that the KdV waves propagate only in one direction and therefore one of the two GLM equations remains normalized and can be differentiated infinitely many times. 15 refs.

  9. Shadow of rotating regular black holes

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon; Amir, Muhammed; Ahmedov, Bobomurat; Ghosh, Sushant G.

    2016-05-01

    We study the shadows cast by the different types of rotating regular black holes viz. Ayón-Beato-García (ABG), Hayward, and Bardeen. These black holes have, in addition to the total mass (M) and rotation parameter (a), different parameters such as electric charge (Q), deviation parameter (g), and magnetic charge (g*). Interestingly, the size of the shadow is affected by these parameters in addition to the rotation parameter. We find that the radius of the shadow in each case decreases monotonically, and the distortion parameter increases, when the values of these parameters increase. A comparison with the standard Kerr case is also investigated. We have also studied the influence of the plasma environment around regular black holes on their shadows. The presence of the plasma increases the apparent size of the regular black hole's shadow due to two effects: (i) gravitational redshift of the photons and (ii) radial dependence of the plasma density.

  10. Numerical Comparison of Two-Body Regularizations

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2007-06-01

    We numerically compare four schemes to regularize a three-dimensional two-body problem under perturbations: the Sperling-Bürdet (S-B), Kustaanheimo-Stiefel (K-S), and Bürdet-Ferrandiz (B-F) regularizations, and a three-dimensional extension of the Levi-Civita (L-C) regularization we developed recently. As for the integration time of the equation of motion, the least time is needed for the unregularized treatment, followed by the K-S, the extended L-C, the B-F, and the S-B regularizations. However, these differences become significantly smaller when the time to evaluate perturbations becomes dominant. As for the integration error after one close encounter, the K-S and the extended L-C regularizations are tied for the least error, followed by the S-B, the B-F, and finally the unregularized scheme for unperturbed orbits with eccentricity less than 2. This order is not changed significantly by various kinds of perturbations. As for the integration error of elliptical orbits after multiple orbital periods, the situation remains the same except for the rank of the S-B scheme, which varies from the best to the second worst depending on the length of integration and/or on the nature of perturbations. Also, we confirm that Kepler energy scaling enhances the performance of the unregularized, K-S, and extended L-C schemes. As a result, the K-S and the extended L-C regularizations with Kepler energy scaling provide the best cost performance in integrating almost all the perturbed two-body problems.
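
    For reference, the planar prototype underlying these schemes is the Levi-Civita map: writing the position as a complex number z and introducing a fictitious time τ,

        z = w^2, \qquad \frac{dt}{d\tau} = r = |w|^2,

    the unperturbed Kepler motion becomes the linear system w'' = (h/2)w, with h the Kepler energy (negative for ellipses), so bounded motion maps to a harmonic oscillator. This is a textbook identity, not a result of the paper; the K-S regularization is its spinor-based extension to three dimensions.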

  11. Regular homotopy for immersions of graphs into surfaces

    NASA Astrophysics Data System (ADS)

    Permyakov, D. A.

    2016-06-01

    We study invariants of regular immersions of graphs into surfaces up to regular homotopy. The concept of the winding number is used to introduce a new simple combinatorial invariant of regular homotopy. Bibliography: 20 titles.

  12. Regular transport dynamics produce chaotic travel times

    NASA Astrophysics Data System (ADS)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F.; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  13. Regular transport dynamics produce chaotic travel times.

    PubMed

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  14. REGULAR VERSUS DIFFUSIVE PHOTOSPHERIC FLUX CANCELLATION

    SciTech Connect

    Litvinenko, Yuri E.

    2011-04-20

    Observations of photospheric flux cancellation on the Sun imply that cancellation can be a diffusive rather than regular process. A criterion is derived, which quantifies the parameter range in which diffusive photospheric cancellation should occur. Numerical estimates show that regular cancellation models should be expected to give a quantitatively accurate description of photospheric cancellation. The estimates rely on a recently suggested scaling for a turbulent magnetic diffusivity, which is consistent with the diffusivity measurements on spatial scales varying by almost two orders of magnitude. Application of the turbulent diffusivity to large-scale dispersal of the photospheric magnetic flux is discussed.

  15. A Quantitative Measure of Memory Reference Regularity

    SciTech Connect

    Mohan, T; de Supinski, B R; McKee, S A; Mueller, F; Yoo, A

    2001-10-01

    The memory performance of applications on existing architectures depends significantly on hardware features like prefetching and caching that exploit the locality of the memory accesses. The principle of locality has guided the design of many key micro-architectural features, including cache hierarchies, TLBs, and branch predictors. Quantitative measures of spatial and temporal locality have been useful for predicting the performance of memory hierarchy components. Unfortunately, the concept of locality is constrained to capturing memory access patterns characterized by proximity, while sophisticated memory systems are capable of exploiting other predictable access patterns. Here, we define the concepts of spatial and temporal regularity, and introduce a measure of spatial access regularity to quantify some of this predictability in access patterns. We present an efficient online algorithm to dynamically determine the spatial access regularity in an application's memory references, and demonstrate its use on a set of regular and irregular codes. We find that the use of our algorithm, with its associated overhead of trace generation, slows typical applications by a factor of 50-200, which is at least an order of magnitude better than traditional full trace generation approaches. Our approach can be applied to the characterization of program access patterns and in the implementation of sophisticated, software-assisted prefetching mechanisms, and its inherently parallel nature makes it well suited for use with multi-threaded programs.
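
    A toy version of such an online regularity measure tracks, per instruction address, whether the current stride repeats the previous one; the fraction of stride-repeating accesses is then a crude spatial-regularity score. This sketch is far simpler than the paper's algorithm, and all names are invented:

        from collections import defaultdict

        def stride_regularity(trace):
            """trace: iterable of (pc, addr). Returns the fraction of accesses
            whose stride matches the previous stride seen at the same pc."""
            last = defaultdict(lambda: (None, None))  # pc -> (last addr, last stride)
            regular = total = 0
            for pc, addr in trace:
                prev_addr, prev_stride = last[pc]
                if prev_addr is not None:
                    stride = addr - prev_addr
                    total += 1
                    if stride == prev_stride:
                        regular += 1
                    last[pc] = (addr, stride)
                else:
                    last[pc] = (addr, None)
            return regular / total if total else 0.0

        seq = [(0, 8 * i) for i in range(100)]   # unit-stride array walk
        print(stride_regularity(seq))            # -> close to 1.0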

  16. Dyslexia in Regular Orthographies: Manifestation and Causation

    ERIC Educational Resources Information Center

    Wimmer, Heinz; Schurz, Matthias

    2010-01-01

    This article summarizes our research on the manifestation of dyslexia in German and on cognitive deficits, which may account for the severe reading speed deficit and the poor orthographic spelling performance that characterize dyslexia in regular orthographies. An only limited causal role of phonological deficits (phonological awareness,…

  17. Strategies of Teachers in the Regular Classroom

    ERIC Educational Resources Information Center

    De Leeuw, Renske Ria; De Boer, Anke Aaltje

    2016-01-01

    It is known that regular schoolteachers have difficulties in educating students with social, emotional and behavioral difficulties (SEBD), mainly because of their disruptive behavior. In order to manage the disruptive behavior of students with SEBD, much advice and many strategies are provided in the educational literature. However, very little is known…

  18. TAUBERIAN THEOREMS FOR MATRIX REGULAR VARIATION.

    PubMed

    Meerschaert, M M; Scheffler, H-P

    2013-04-01

    Karamata's Tauberian theorem relates the asymptotics of a nondecreasing right-continuous function to that of its Laplace-Stieltjes transform, using regular variation. This paper establishes the analogous Tauberian theorem for matrix-valued functions. Some applications to time series analysis are indicated.
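
    The scalar statement being generalized reads, in its standard form (quoted from the classical literature, not from this paper): for U nondecreasing with Laplace-Stieltjes transform ω(s) = ∫₀^∞ e^{-sx} dU(x), ρ ≥ 0, and ℓ slowly varying,

        U(x) \sim \frac{x^{\rho}\,\ell(x)}{\Gamma(\rho+1)} \ (x \to \infty)
        \quad\Longleftrightarrow\quad
        \omega(s) \sim s^{-\rho}\,\ell(1/s) \ (s \to 0^{+}).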

  19. TAUBERIAN THEOREMS FOR MATRIX REGULAR VARIATION

    PubMed Central

    MEERSCHAERT, M. M.; SCHEFFLER, H.-P.

    2013-01-01

    Karamata’s Tauberian theorem relates the asymptotics of a nondecreasing right-continuous function to that of its Laplace-Stieltjes transform, using regular variation. This paper establishes the analogous Tauberian theorem for matrix-valued functions. Some applications to time series analysis are indicated. PMID:24644367

  20. Regular Classroom Teachers' Perceptions of Mainstreaming Effects.

    ERIC Educational Resources Information Center

    Ringlaben, Ravic P.; Price, Jay R.

    To assess regular classroom teachers' perceptions of mainstreaming, a 22-item questionnaire was completed by 117 teachers (K through 12). Among results were that nearly half of the Ss indicated a lack of preparation for implementing mainstreaming; 47% tended to be very willing to accept mainstreamed students; 42% said mainstreaming was working…

  1. Regularities in Spearman's Law of Diminishing Returns.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    2003-01-01

    Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…

  2. Learning regular expressions for clinical text classification

    PubMed Central

    Bui, Duy Duc An; Zeng-Treitler, Qing

    2014-01-01

    Objectives Natural language processing (NLP) applications typically use regular expressions that have been developed manually by human experts. Our goal is to automate both the creation and utilization of regular expressions in text classification. Methods We designed a novel regular expression discovery (RED) algorithm and implemented two text classifiers based on RED. The RED+ALIGN classifier combines RED with an alignment algorithm, and RED+SVM combines RED with a support vector machine (SVM) classifier. Two clinical datasets were used for testing and evaluation: the SMOKE dataset, containing 1091 text snippets describing smoking status; and the PAIN dataset, containing 702 snippets describing pain status. We performed 10-fold cross-validation to calculate accuracy, precision, recall, and F-measure metrics. In the evaluation, an SVM classifier was trained as the control. Results The two RED classifiers achieved 80.9–83.0% in overall accuracy on the two datasets, which is 1.3–3% higher than SVM's accuracy (p<0.001). Similarly, small but consistent improvements have been observed in precision, recall, and F-measure when RED classifiers are compared with SVM alone. More significantly, RED+ALIGN correctly classified many instances that were misclassified by the SVM classifier (8.1–10.3% of the total instances and 43.8–53.0% of SVM's misclassifications). Conclusions Machine-generated regular expressions can be effectively used in clinical text classification. The regular expression-based classifier can be combined with other classifiers, like SVM, to improve classification performance. PMID:24578357
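
    A minimal illustration of regular-expression-based snippet classification in the same spirit, with hand-written rules standing in for the machine-discovered ones; the patterns and labels are invented for the example and are not from the RED system:

        import re

        # Toy smoking-status rules; a RED-style system would learn such patterns.
        RULES = [
            (re.compile(r"\b(never|non)[- ]?smoker\b", re.I), "non-smoker"),
            (re.compile(r"\bquit smoking\b|\bformer smoker\b", re.I), "former smoker"),
            (re.compile(r"\bsmokes?\b|\bcurrent smoker\b", re.I), "current smoker"),
        ]

        def classify(snippet):
            for pattern, label in RULES:
                if pattern.search(snippet):
                    return label
            return "unknown"

        print(classify("Patient is a former smoker, quit smoking in 2005."))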

  3. A novel combined regularization algorithm of total variation and Tikhonov regularization for open electrical impedance tomography.

    PubMed

    Liu, Jinzhen; Ling, Lin; Li, Gang

    2013-07-01

    A Tikhonov regularization method in the inverse problem of electrical impedance tomography (EIT) often results in a smooth distribution reconstruction, with which we can barely make a clear separation between the inclusions and background. The recently popular total variation (TV) regularization method, including the lagged diffusivity (LD) method, can sharpen the edges, and is robust to noise in a small convergence region. Therefore, in this paper, we propose a novel regularization method combining the Tikhonov and LD regularization methods. Firstly, we clarify the implementation details of the Tikhonov, LD and combined methods in two-dimensional open EIT by performing the current injection and voltage measurement on one boundary of the imaging object. Next, we introduce a weighted parameter to the Tikhonov regularization method, aiming to explore the effect of the weighted parameter on the resolution and quality of reconstructed images with the inclusion at different depths. Then, we analyze the performance of these algorithms with noisy data. Finally, we evaluate the effect of the current injection pattern on reconstruction quality and propose a modified current injection pattern. The results indicate that the combined regularization algorithm, with stable convergence, is able to improve the reconstruction quality with sharp contrast, and is more robust to noise than the Tikhonov and LD regularization methods alone. In addition, the results show that the current injection pattern with a bigger driver angle leads to a better reconstruction quality.
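
    Schematically, such a combined method weights the two penalties against the data term; with F the forward operator and β a weighting parameter, the reconstruction solves something of the form below. This is our paraphrase of the construction; the paper's exact formulation and symbols may differ:

        \hat{\sigma} = \arg\min_{\sigma}\ \|F(\sigma) - v\|_2^2
            + \beta\,\lambda_1 \|\sigma\|_2^2
            + (1-\beta)\,\lambda_2\,\mathrm{TV}(\sigma),
        \qquad \mathrm{TV}(\sigma) = \int_\Omega |\nabla\sigma|\,d\Omega .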

  4. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
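
    In its simplest discrete form, Tikhonov regularization of a linear system has the closed-form solution below; this generic snippet only illustrates the operation, not CSR's GRACE processing chain, and the matrices are random stand-ins.

        import numpy as np

        def tikhonov(A, b, lam):
            """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

        rng = np.random.default_rng(2)
        A = rng.normal(size=(50, 20))
        b = A @ np.ones(20) + 0.1 * rng.normal(size=50)
        print(np.round(tikhonov(A, b, lam=0.5)[:5], 2))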

  5. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  6. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  7. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  8. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  9. Regularity of nuclear structure under random interactions

    SciTech Connect

    Zhao, Y. M.

    2011-05-06

    In this contribution I present a brief introduction to simplicity out of complexity in nuclear structure, specifically, the regularity of nuclear structure under random interactions. I exemplify such simplicity by two examples: spin-zero ground state dominance and positive parity ground state dominance in even-even nuclei. Then I discuss two recent results of nuclear structure in the presence of random interactions, in collaboration with Prof. Arima. Firstly I discuss sd bosons under random interactions, with the focus on excited states in the yrast band. We find a few regular patterns in these excited levels. Secondly I discuss our recent efforts towards obtaining eigenvalues without diagonalizing the full matrices of the nuclear shell model Hamiltonian.

  10. Charged fermions tunneling from regular black holes

    SciTech Connect

    Sharif, M.; Javed, W.

    2012-11-15

    We study Hawking radiation of charged fermions as a tunneling process from charged regular black holes, i.e., the Bardeen and ABGB black holes. For this purpose, we apply the semiclassical WKB approximation to the general covariant Dirac equation for charged particles and evaluate the tunneling probabilities. We recover the Hawking temperature corresponding to these charged regular black holes. Further, we consider the back-reaction effects of the emitted spin particles from black holes and calculate their corresponding quantum corrections to the radiation spectrum. We find that this radiation spectrum is not purely thermal due to the energy and charge conservation but has some corrections. In the absence of charge, e = 0, our results are consistent with those already present in the literature.
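
    The semiclassical logic is the standard one: the WKB emission probability is compared with a Boltzmann factor to read off the temperature. These are the generic tunneling-method relations (with Φ the electrostatic potential at the horizon), quoted for orientation rather than from this paper:

        \Gamma \sim e^{-2\,\mathrm{Im}\,I/\hbar}, \qquad
        \Gamma = e^{-(E - e\Phi)/T_H}
        \ \Rightarrow\
        T_H = \frac{(E - e\Phi)\,\hbar}{2\,\mathrm{Im}\,I} .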

  11. Tracking magnetogram proper motions by multiscale regularization

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.

    1995-01-01

    Long uninterrupted sequences of solar magnetograms from the global oscillations network group (GONG) network and from the solar and heliospheric observatory (SOHO) satellite will provide the opportunity to study the proper motions of magnetic features. The possible use of multiscale regularization, a scale-recursive estimation technique which begins with a prior model of how state variables and their statistical properties propagate over scale, is investigated. Short magnetogram sequences are analyzed with the multiscale regularization algorithm as applied to optical flow. This algorithm is found to be efficient, provides results for all the spatial scales spanned by the data and provides error estimates for the solutions. It is found that the algorithm is less sensitive to evolutionary changes than correlation tracking.

  12. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in user input sanitization of software systems often lead to vulnerabilities. Among them many are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FST. A compact representation of FST is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.
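
    The semantic distinctions the paper models are visible even in an ordinary regex engine; for instance, greedy versus reluctant matching of the "same" pattern produces different replacements:

        import re

        s = "<a><b><c>"
        print(re.sub(r"<.*>", "X", s))    # greedy:    "X"   (one maximal match)
        print(re.sub(r"<.*?>", "X", s))   # reluctant: "XXX" (three minimal matches)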

  13. A regular version of Smilansky model

    SciTech Connect

    Barseghyan, Diana; Exner, Pavel

    2014-04-15

    We discuss a modification of the Smilansky model in which a singular potential “channel” is replaced by a regular potential, unbounded from below, which shrinks as it becomes deeper. We demonstrate that, similarly to the original model, such a system exhibits a spectral transition with respect to the coupling constant, and determine the critical value above which a new spectral branch opens. The result is generalized to situations with multiple potential “channels.”

  14. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence mediated tomography a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were filled with various fluorochrome inclusions (Cy5.5), for which optical data at 60 projections over 360 degrees were acquired. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included this bias was significantly reduced.
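
    The core of an RL/MLEM iteration with a smoothing prior of the kind described above can be sketched in a few lines. Folding the Gaussian "floating default" in as a damping toward a smoothed copy of the estimate is our simplification, not the authors' exact entropic scheme; A, y, and all parameters are placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def rl_mlem(A, y, n_iter=50, reg=0.1, sigma=1.0):
            """Richardson-Lucy / MLEM for y ~ Poisson(A x), with mild damping
            of each update toward a Gaussian-smoothed copy of the estimate."""
            x = np.ones(A.shape[1])
            norm = A.T @ np.ones_like(y)           # sensitivity image
            for _ in range(n_iter):
                x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / norm
                x = (1 - reg) * x + reg * gaussian_filter(x, sigma)
            return x

        # Usage (toy 1-D deblurring): x_hat = rl_mlem(A, y) with A >= 0, y >= 0.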

  15. A regularization approach to hydrofacies delineation

    SciTech Connect

    Wohlberg, Brendt; Tartakovsky, Daniel

    2009-01-01

    We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.

  16. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  17. Charge-regularization effects on polyelectrolytes

    NASA Astrophysics Data System (ADS)

    Muthukumar, Murugappan

    2012-02-01

    When electrically charged macromolecules are dispersed in polar solvents, their effective net charge is generally different from their chemical charges, due to competition between counterion adsorption and the translational entropy of dissociated counterions. The effective charge changes significantly as the experimental conditions change such as variations in solvent quality, temperature, and the concentration of added small electrolytes. This charge-regularization effect leads to major difficulties in interpreting experimental data on polyelectrolyte solutions and challenges in understanding the various polyelectrolyte phenomena. Even the most fundamental issue of experimental determination of molar mass of charged macromolecules by light scattering method has been difficult so far due to this feature. We will present a theory of charge-regularization of flexible polyelectrolytes in solutions and discuss the consequences of charge-regularization on (a) experimental determination of molar mass of polyelectrolytes using scattering techniques, (b) coil-globule transition, (c) macrophase separation in polyelectrolyte solutions, (d) phase behavior in coacervate formation, and (e) volume phase transitions in polyelectrolyte gels.

  18. Automatic detection of regularly repeating vocalizations

    NASA Astrophysics Data System (ADS)

    Mellinger, David

    2005-09-01

    Many animal species produce repetitive sounds at regular intervals. This regularity can be used for automatic recognition of the sounds, providing improved detection at a given signal-to-noise ratio. Here, the detection of sperm whale sounds is examined. Sperm whales produce highly repetitive “regular clicks” at periods of about 0.2-2 s, and faster click trains in certain behavioral contexts. The following detection procedure was tested: a spectrogram was computed; values within a certain frequency band were summed; time windowing was applied; each windowed segment was autocorrelated; and the maximum of the autocorrelation within a certain periodicity range was chosen. This procedure was tested on sets of recordings containing sperm whale sounds and interfering sounds, both low-frequency recordings from autonomous hydrophones and high-frequency ones from towed hydrophone arrays. An optimization procedure iteratively varies detection parameters (spectrogram frame length and frequency range, window length, periodicity range, etc.). Performance of various sets of parameters was measured by setting a standard level of allowable missed calls, and the resulting optimum parameters are described. Performance is also compared to that of a neural network trained using the data sets. The method is also demonstrated for sounds of blue whales, minke whales, and seismic airguns. [Funding from ONR.]
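
    The detection statistic itself is easy to prototype: sum a spectrogram band into an envelope, autocorrelate, and take the peak inside the allowed period range. The sketch below follows that recipe generically; the sampling rate, period range, and synthetic click train are placeholders, not the tuned values from the study.

        import numpy as np

        def periodicity_score(x, fs, period_range=(0.2, 2.0)):
            """Peak of the normalized autocorrelation of envelope x (sampled at
            fs Hz) within period_range seconds; high values indicate regular
            repetition."""
            x = x - x.mean()
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]
            ac /= ac[0] if ac[0] else 1.0
            lo, hi = (int(p * fs) for p in period_range)
            return ac[lo:hi].max()

        fs = 100.0
        t = np.arange(0, 10, 1 / fs)
        clicks = (np.sin(2 * np.pi * t / 0.5) > 0.99).astype(float)  # ~0.5 s period
        noisy = clicks + 0.1 * np.random.default_rng(3).normal(size=t.size)
        print(periodicity_score(noisy, fs))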

  19. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  20. Regularized robust coding for face recognition.

    PubMed

    Yang, Meng; Zhang, Lei; Yang, Jian; Zhang, David

    2013-05-01

    Recently the sparse representation based classification (SRC) has been proposed for robust face recognition (FR). In SRC, the testing image is coded as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be effective enough to describe the coding residual in practical FR systems. Meanwhile, the sparsity constraint on the coding coefficients makes the computational cost of SRC very high. In this paper, we propose a new face coding model, namely regularized robust coding (RRC), which could robustly regress a given signal with regularized regression coefficients. By assuming that the coding residual and the coding coefficient are respectively independent and identically distributed, the RRC seeks a maximum a posteriori solution of the coding problem. An iteratively reweighted regularized robust coding (IR³C) algorithm is proposed to solve the RRC model efficiently. Extensive experiments on representative face databases demonstrate that the RRC is much more effective and efficient than state-of-the-art sparse representation based methods in dealing with face occlusion, corruption, lighting, and expression changes, etc.

  1. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

    We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  2. Regular physical exercise: way to healthy life.

    PubMed

    Siddiqui, N I; Nessa, A; Hossain, M A

    2010-01-01

    Any bodily activity or movement that enhances and maintains overall health and physical fitness is called physical exercise. The habit of regular physical exercise has numerous benefits. Exercise is of various types, such as aerobic exercise, anaerobic exercise and flexibility exercise. Aerobic exercise moves the large muscle groups with alternate contraction and relaxation, forces deep breathing, and makes the heart pump more blood with adequate tissue oxygenation. It is also called cardiovascular exercise. Examples of aerobic exercise are walking, running, jogging, swimming etc. In anaerobic exercise, there is forceful contraction of muscle with stretching, usually mechanically aided, which helps to build up muscle strength and muscle bulk. Examples are weight lifting, pulling, pushing, sprinting etc. Flexibility exercise is a type of stretching exercise to improve the movements of muscles, joints and ligaments. Walking is a good example of aerobic exercise: easy to perform, safe, effective, requiring no training or equipment, and with less chance of injury. A regular 30-minute brisk walk in the morning, totaling 150 minutes per week, is good exercise. Regular exercise improves cardiovascular status and reduces the risk of cardiac disease, high blood pressure and cerebrovascular disease. It reduces body weight, improves insulin sensitivity, helps in glycemic control, and prevents obesity and diabetes mellitus. It is helpful for relieving anxiety and stress, and brings a sense of well-being and overall physical fitness. The global trend toward mechanization and labor saving is leading to an epidemic of long-term chronic diseases such as diabetes mellitus and cardiovascular disease. All efforts should be made to create public awareness promoting physical activity and physically demanding recreational pursuits, and to provide adequate facilities. PMID:20046192

  3. Total-variation regularization with bound constraints

    SciTech Connect

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.

  4. The regular state in higher order gravity

    NASA Astrophysics Data System (ADS)

    Cotsakis, Spiros; Kadry, Seifedine; Trachilis, Dimitrios

    2016-08-01

    We consider the higher-order gravity theory derived from the quadratic Lagrangian R + εR² in vacuum as a first-order (ADM-type) system with constraints, and build time developments of solutions of an initial value formulation of the theory. We show that all such solutions, if analytic, contain the right number of free functions to qualify as general solutions of the theory. We further show that any regular analytic solution which satisfies the constraints and the evolution equations can be given in the form of an asymptotic formal power series expansion.

  5. New Regularization Method for EXAFS Analysis

    NASA Astrophysics Data System (ADS)

    Reich, Tatiana Ye.; Korshunov, Maxim E.; Antonova, Tatiana V.; Ageev, Alexander L.; Moll, Henry; Reich, Tobias

    2007-02-01

    As an alternative to the analysis of EXAFS spectra by conventional shell fitting, the Tikhonov regularization method has been proposed. An improved algorithm that utilizes a priori information about the sample has been developed and applied to the analysis of U L3-edge spectra of soddyite, (UO2)2SiO4·2H2O, and of U(VI) sorbed onto kaolinite. The partial radial distribution functions g1(U-U), g2(U-Si), and g3(U-O) of soddyite agree with crystallographic values and previous EXAFS results.

  6. New Regularization Method for EXAFS Analysis

    SciTech Connect

    Reich, Tatiana Ye.; Reich, Tobias; Korshunov, Maxim E.; Antonova, Tatiana V.; Ageev, Alexander L.; Moll, Henry

    2007-02-02

    As an alternative to the analysis of EXAFS spectra by conventional shell fitting, the Tikhonov regularization method has been proposed. An improved algorithm that utilizes a priori information about the sample has been developed and applied to the analysis of U L3-edge spectra of soddyite, (UO2)2SiO4·2H2O, and of U(VI) sorbed onto kaolinite. The partial radial distribution functions g1(U-U), g2(U-Si), and g3(U-O) of soddyite agree with crystallographic values and previous EXAFS results.

  7. Regular systems of inbreeding with mutation.

    PubMed

    Campbell, R B

    1988-08-01

    Probability of identity by type is studied for regular systems of inbreeding in the presence of mutation. Analytic results are presented for half-sib mating, first cousin mating, and half nth cousin mating under both infinite allele and two allele (back mutation) models. Reasonable rates of mutation do not provide significantly different results from probability of identity by descent in the absence of mutation. Homozygosity is higher under half-sib mating than under first cousin mating, but the expected number of copies of a gene in the population is higher under first cousin mating than under half-sib mating.
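
    The mutation discount that enters such calculations is worth stating: under the infinite-allele model, two homologous genes are identical in type only if no mutation has struck either lineage since their common ancestor, so each generation of separation contributes a factor (1 - u)². In coalescence notation (our formulation, not the paper's), with T the number of generations back to the common ancestor:

        P(\text{identical in type}) = \mathbb{E}\bigl[(1-u)^{2T}\bigr].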

  8. Multichannel image regularization using anisotropic geodesic filtering

    SciTech Connect

    Grazzini, Jacopo A

    2010-01-01

    This paper extends a recent image-dependent regularization approach, introduced in earlier work, aiming at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. Building on this, we design a generalized anisotropic geodesic filter, incorporating not only a measure of the edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.

  9. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
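
    The underlying model being protected is ordinary l2-regularized logistic regression; a plain, non-private reference implementation is sketched below. The cryptographic machinery of the paper is not represented here, and all data and parameters are synthetic placeholders.

        import numpy as np

        def fit_logreg(X, y, lam=1.0, lr=0.1, n_iter=500):
            """Gradient descent on the l2-regularized logistic loss; y in {0, 1}."""
            w = np.zeros(X.shape[1])
            for _ in range(n_iter):
                p = 1.0 / (1.0 + np.exp(-X @ w))
                w -= lr * (X.T @ (p - y) / len(y) + lam * w / len(y))
            return w

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 3))
        y = (X @ np.array([1.5, -2.0, 0.5]) + 0.5 * rng.normal(size=200) > 0).astype(float)
        print(np.round(fit_logreg(X, y), 2))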

  10. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  12. The Regularity of Optimal Irrigation Patterns

    NASA Astrophysics Data System (ADS)

    Morel, Jean-Michel; Santambrogio, Filippo

    2010-02-01

    A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. The aim of this paper is to prove the regularity of paths (rivers, branches, ...) when the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
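
    The energy functional in question is the Gilbert-type branched transport cost: for a network carrying flow s(e) along each edge e of length ℓ(e),

        E_\alpha = \sum_{e} s(e)^{\alpha}\,\ell(e), \qquad 0 < \alpha < 1,

    whose concavity in s rewards merging flows into common wide routes; this standard form is quoted for orientation and may differ in notation from the paper.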

  13. Accelerating Large Data Analysis By Exploiting Regularities

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
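
    One of the regularity tests described, checking whether one zone is a rigid-body transformation of another, can be sketched with the standard Kabsch (SVD) fit; the snippet below is an illustration of that test on synthetic points with known correspondence, not the paper's implementation, which must also cope with meshes where correspondence is not given.

        # Fit R, t minimizing ||A @ R.T + t - B|| and test the residual.
        import numpy as np

        def rigid_fit(A, B):
            cA, cB = A.mean(axis=0), B.mean(axis=0)
            H = (A - cA).T @ (B - cB)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cB - R @ cA

        rng = np.random.default_rng(1)
        A = rng.normal(size=(100, 3))
        th = 0.3
        Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
                       [np.sin(th),  np.cos(th), 0.0],
                       [0.0,          0.0,       1.0]])
        B = A @ Rz.T + [1.0, 2.0, 3.0]

        R, t = rigid_fit(A, B)
        resid = np.linalg.norm(A @ R.T + t - B)
        print("rigid-body match:", resid < 1e-8, " residual:", resid)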

  14. Regularized Semiparametric Estimation for Ordinary Differential Equations

    PubMed Central

    Li, Yun; Zhu, Ji; Wang, Naisyin

    2015-01-01

    Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. A key interest is thus to estimate these parameters well. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constants within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639
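
    The abstract's idea, parameters that move during transitions but stay flat in stable stages, is the hallmark of a fused (total-variation) penalty. The sketch below is a generic analog, not the authors' estimator: it recovers a piecewise-constant parameter trajectory from noisy pointwise estimates by penalizing the l1 norm of its first differences, solved with a small ADMM loop.

        # minimize 0.5*||x - y||^2 + lam*||D x||_1, D = first differences
        import numpy as np

        def tv_denoise(y, lam, rho=1.0, iters=300):
            n = len(y)
            D = np.diff(np.eye(n), axis=0)
            Q = np.linalg.inv(np.eye(n) + rho * D.T @ D)
            x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
            for _ in range(iters):
                x = Q @ (y + rho * D.T @ (z - u))
                Dx = D @ x
                z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0)
                u += Dx - z
            return x

        t = np.linspace(0.0, 1.0, 200)
        theta = np.where(t < 0.5, 1.0, 2.0)     # two stable stages, one transition
        noisy = theta + 0.2 * np.random.default_rng(2).normal(size=t.size)
        est = tv_denoise(noisy, lam=1.0)
        print("recovered stage levels:", est[:100].mean(), est[100:].mean())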

  15. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    ERIC Educational Resources Information Center

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  16. Delayed Acquisition of Non-Adjacent Vocalic Distributional Regularities

    ERIC Educational Resources Information Center

    Gonzalez-Gomez, Nayeli; Nazzi, Thierry

    2016-01-01

    The ability to compute non-adjacent regularities is key in the acquisition of a new language. In the domain of phonology/phonotactics, sensitivity to non-adjacent regularities between consonants has been found to appear between 7 and 10 months. The present study focuses on the emergence of a posterior-anterior (PA) bias, a regularity involving two…

  17. The Essential Special Education Guide for the Regular Education Teacher

    ERIC Educational Resources Information Center

    Burns, Edward

    2007-01-01

    The Individuals with Disabilities Education Act (IDEA) of 2004 has placed a renewed emphasis on the importance of the regular classroom, the regular classroom teacher and the general curriculum as the primary focus of special education. This book contains over 100 topics that deal with real issues and concerns regarding the regular classroom and…

  18. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  19. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  20. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  1. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  2. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  3. Regularity of free boundaries a heuristic retro

    PubMed Central

    Caffarelli, Luis A.; Shahgholian, Henrik

    2015-01-01

    This survey concerns regularity theory of a few free boundary problems that have been developed over the past half-century. Our intention is to bring up different ideas and techniques that constitute the fundamentals of the theory. We shall discuss four different problems, where approaches are somewhat different in each case. Nevertheless, these problems can be divided into two groups: (i) obstacle and thin obstacle problems; (ii) minimal surfaces, and cavitation flow of a perfect fluid. In each case, we shall only discuss the methodology and approaches, giving basic ideas and tools that have been specifically designed and tailored for that particular problem. The survey is kept at a heuristic level with mainly geometric interpretation of the techniques and situations at hand. PMID:26261372

  4. Local orientational mobility in regular hyperbranched polymers

    NASA Astrophysics Data System (ADS)

    Dolgushev, Maxim; Markelov, Denis A.; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.

  5. Generalized equations of state and regular universes

    NASA Astrophysics Data System (ADS)

    Contreras, F.; Cruz, N.; González, E.

    2016-05-01

    We find nonsingular solutions for universes filled with a fluid which obeys a Generalized Equation of State of the form P(ρ) = -Aρ + γρ^λ. An emergent universe is obtained if A = 1 and λ = 1/2. If the matter source is reinterpreted as that of a scalar matter field with some potential, the corresponding potential is derived. For a closed universe, an exact bounce solution is found for A = 1/3 and the same λ. We also explore how the composition of these universes can be interpreted in terms of known fluids. It is of interest to note that accelerated solutions previously found for the late-time evolution also represent regular solutions at early times.

  6. Local orientational mobility in regular hyperbranched polymers.

    PubMed

    Dolgushev, Maxim; Markelov, Denis A; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes. PMID:27575171

  7. Regularization of Motion Equations with L-Transformation and Numerical Integration of the Regular Equations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, Sergei M.

    2003-04-01

    The sets of L-matrices of the second, fourth and eighth orders are constructed axiomatically. The defining relations are taken from the regularization of motion equations for the Keplerian problem. In particular, the Levi-Civita matrix and KS-matrix are L-matrices of second and fourth order, respectively. A theorem on the ranks of L-transformations of different orders is proved. The notion of L-similarity transformation is introduced, certain sets of L-matrices are constructed, and their classification is given. An application of fourth-order L-matrices to N-body problem regularization is given. A method of correction for regular coordinates in the Runge-Kutta-Fehlberg integration method for regular motion equations of a perturbed two-body problem is suggested. Comparison is given for the results of numerical integration in the problem of defining the orbit of a satellite, with and without the above correction method. The comparison is carried out with respect to the number of calls to the subroutine evaluating the perturbational accelerations vector. The results of integration using the correction turn out to compare favorably.

  8. Regularization of Instantaneous Frequency Attribute Computations

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computation of a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell Transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time Frequency Analysis: Theory and Applications. Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
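
    The first of the two methods is easy to sketch: the instantaneous frequency is the (scaled) derivative of the analytic-signal phase, which is noisy and benefits from regularization. In the toy below, a moving-average smoother stands in for the roughness-penalized estimate; it is not the authors' code.

        # Instantaneous frequency of a chirp via the analytic signal.
        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0
        t = np.arange(0.0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * (50.0 * t + 30.0 * t**2))   # 50 Hz -> 110 Hz

        phase = np.unwrap(np.angle(hilbert(x)))
        f_raw = np.gradient(phase) * fs / (2 * np.pi)      # noisy, esp. at ends
        f_smooth = np.convolve(f_raw, np.ones(31) / 31, mode="same")
        # ~56 Hz near t = 0.1 and ~104 Hz near t = 0.9
        print(f_smooth[100], f_smooth[-100])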

  9. Regularization for Atmospheric Temperature Retrieval Problems

    NASA Technical Reports Server (NTRS)

    Velez-Reyes, Miguel; Galarza-Galarza, Ruben

    1997-01-01

    Passive remote sensing of the atmosphere is used to determine the atmospheric state. A radiometer measures microwave emissions from earth's atmosphere and surface. The radiance measured by the radiometer is proportional to the brightness temperature. This brightness temperature can be used to estimate atmospheric parameters such as temperature and water vapor content. These quantities are of primary importance for different applications in meteorology, oceanography, and geophysical sciences. Depending on the range in the electromagnetic spectrum being measured by the radiometer and the atmospheric quantities to be estimated, the retrieval or inverse problem of determining atmospheric parameters from brightness temperature might be linear or nonlinear. In most applications, the retrieval problem requires the inversion of a Fredholm integral equation of the first kind, making this an ill-posed problem. The numerical solution of the retrieval problem requires the transformation of the continuous problem into a discrete problem. The ill-posedness of the continuous problem translates into ill-conditioning or ill-posedness of the discrete problem. Regularization methods are used to convert the ill-posed problem into a well-posed one. In this paper, we present some results of our work in applying different regularization techniques to atmospheric temperature retrievals using brightness temperatures measured with the SSM/T-1 sensor. Simulation results are presented which show the potential of these techniques to improve temperature retrievals. In particular, no statistical assumptions are needed and the algorithms were capable of correctly estimating the corner of the temperature profile at the tropopause independently of the initial guess.
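
    The core numerical step, stabilizing a discretized Fredholm first-kind equation, is standard Tikhonov regularization. The sketch below uses a made-up smoothing kernel rather than the SSM/T-1 forward model: unregularized inversion would amplify the noise, while the damped normal equations give a stable estimate.

        # min ||K x - b||^2 + lam^2 ||x||^2  =>  (K'K + lam^2 I) x = K' b
        import numpy as np

        n = 100
        s = np.linspace(0.0, 1.0, n)
        K = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.02)   # smoothing kernel
        x_true = np.sin(2 * np.pi * s) + 0.5
        b = K @ x_true + 1e-3 * np.random.default_rng(3).normal(size=n)

        lam = 1e-2
        x_tik = np.linalg.solve(K.T @ K + lam**2 * np.eye(n), K.T @ b)
        rel = np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true)
        print("relative error:", rel)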

  10. Black hole mimickers: Regular versus singular behavior

    SciTech Connect

    Lemos, Jose P. S.; Zaslavskii, Oleg B.

    2008-07-15

    Black hole mimickers are possible alternatives to black holes; they would look observationally almost like black holes but would have no horizon. The properties in the near-horizon region where gravity is strong can be quite different for both types of objects, but at infinity it could be difficult to discern black holes from their mimickers. To disentangle this possible confusion, we examine the near-horizon properties, and their connection with far away asymptotic properties, of some candidates for black hole mimickers. We study spherically symmetric uncharged or charged but nonextremal objects, as well as spherically symmetric charged extremal objects. Within the uncharged or charged but nonextremal black hole mimickers, we study nonextremal ε-wormholes on the threshold of the formation of an event horizon, of which a subclass are called black foils, and gravastars. Within the charged extremal black hole mimickers we study extremal ε-wormholes on the threshold of the formation of an event horizon, quasi-black holes, and wormholes on the basis of quasi-black holes from Bonnor stars. We elucidate whether or not the objects belonging to these two classes remain regular in the near-horizon limit. The requirement of full regularity, i.e., finite curvature and absence of naked behavior, up to an arbitrary neighborhood of the gravitational radius of the object enables one to rule out potential mimickers in most of the cases. A list ranking the best black hole mimickers up to the worst, both nonextremal and extremal, is as follows: wormholes on the basis of extremal black holes or on the basis of quasi-black holes, quasi-black holes, wormholes on the basis of nonextremal black holes (black foils), and gravastars. Since in observational astrophysics it is difficult to find extremal configurations (the best mimickers in the ranking), whereas nonextremal configurations are really bad mimickers, the task of distinguishing black holes from their mimickers seems to

  11. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. PMID:27513219

  13. Error analysis for matrix elastic-net regularization algorithms.

    PubMed

    Li, Hong; Chen, Na; Li, Luoqing

    2012-05-01

    Elastic-net regularization is a successful approach in statistical modeling. It can avoid large variations which occur in estimating complex models. In this paper, elastic-net regularization is extended to a more general setting, the matrix recovery (matrix completion) setting. Based on a combination of the nuclear-norm minimization and the Frobenius-norm minimization, we consider the matrix elastic-net (MEN) regularization algorithm, which is an analog to the elastic-net regularization scheme from compressive sensing. Some properties of the estimator are characterized by the singular value shrinkage operator. We estimate the error bounds of the MEN regularization algorithm in the framework of statistical learning theory. We compute the learning rate by estimates of the Hilbert-Schmidt operators. In addition, an adaptive scheme for selecting the regularization parameter is presented. Numerical experiments demonstrate the superiority of the MEN regularization algorithm.
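
    The singular-value shrinkage operator mentioned in the abstract has a closed form that is easy to state: soft-threshold the singular values (the nuclear-norm part) and then shrink them proportionally (the Frobenius part). The sketch below shows that proximal step on a noisy low-rank matrix; the penalty weights are illustrative.

        # argmin_M 0.5*||M - Y||_F^2 + ln*||M||_* + 0.5*lf*||M||_F^2
        import numpy as np

        def men_prox(Y, ln, lf):
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return U @ np.diag(np.maximum(s - ln, 0.0) / (1.0 + lf)) @ Vt

        rng = np.random.default_rng(4)
        L = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 50))   # rank-5 truth
        Y = L + 0.5 * rng.normal(size=(50, 50))
        M = men_prox(Y, ln=8.0, lf=0.1)
        print("rank:", np.linalg.matrix_rank(M),
              "relative error:", np.linalg.norm(M - L) / np.linalg.norm(L))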

  14. Preparation of Regular Specimens for Atom Probes

    NASA Technical Reports Server (NTRS)

    Kuhlman, Kim; Wishard, James

    2003-01-01

    A method of preparation of specimens of non-electropolishable materials for analysis by atom probes is being developed as a superior alternative to a prior method. In comparison with the prior method, the present method involves less processing time. Also, whereas the prior method yields irregularly shaped and sized specimens, the present developmental method offers the potential to prepare specimens of regular shape and size. The prior method is called the method of sharp shards because it involves crushing the material of interest and selecting microscopic sharp shards of the material for use as specimens. Each selected shard is oriented with its sharp tip facing away from the tip of a stainless-steel pin and is glued to the tip of the pin by use of silver epoxy. Then the shard is milled by use of a focused ion beam (FIB) to make the shard very thin (relative to its length) and to make its tip sharp enough for atom-probe analysis. The method of sharp shards is extremely time-consuming because the selection of shards must be performed with the help of a microscope, the shards must be positioned on the pins by use of micromanipulators, and the irregularity of size and shape necessitates many hours of FIB milling to sharpen each shard. In the present method, a flat slab of the material of interest (e.g., a polished sample of rock or a coated semiconductor wafer) is mounted in the sample holder of a dicing saw of the type conventionally used to cut individual integrated circuits out of the wafers on which they are fabricated in batches. A saw blade appropriate to the material of interest is selected. The depth of cut and the distance between successive parallel cuts are made such that what is left after the cuts is a series of thin, parallel ridges on a solid base. Then the workpiece is rotated 90° and the pattern of cuts is repeated, leaving behind a square array of square posts on the solid base. The posts can be made regular, long, and thin, as required for samples

  15. Grouping pursuit through a regularization solution surface *

    PubMed Central

    Shen, Xiaotong; Huang, Hsin-Cheng

    2010-01-01

    Extracting grouping structure or identifying homogeneous subgroups of predictors in regression is crucial for high-dimensional data analysis. A low-dimensional structure, grouping in particular, when captured in a regression model, makes it possible to enhance predictive performance and to facilitate a model's interpretability. Grouping pursuit extracts homogeneous subgroups of predictors most responsible for outcomes of a response. This is the case in gene network analysis, where grouping reveals gene functionalities with regard to the progression of a disease. To address challenges in grouping pursuit, we introduce a novel homotopy method for computing an entire solution surface through regularization involving a piecewise linear penalty. This nonconvex and overcomplete penalty permits adaptive grouping and nearly unbiased estimation, which is treated with a novel concept of grouped subdifferentials and difference convex programming for efficient computation. Finally, the proposed method not only achieves high performance as suggested by numerical analysis, but also has the desired optimality with regard to grouping pursuit and prediction as shown by our theoretical results. PMID:20689721

  16. Compression and regularization with the information bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, Dj; Schwab, David

    Compression fundamentally involves a decision about what is relevant and what is not. The information bottleneck (IB) by Tishby, Pereira, and Bialek formalized this notion as an information-theoretic optimization problem and proposed an optimal tradeoff between throwing away as many bits as possible, and selectively keeping those that are most important. The IB has also recently been proposed as a theory of sensory gating and predictive computation in the retina by Palmer et al. Here, we introduce an alternative formulation of the IB, the deterministic information bottleneck (DIB), that we argue better captures the notion of compression, including that done by the brain. As suggested by its name, the solution to the DIB problem is a deterministic encoder, as opposed to the stochastic encoder that is optimal under the IB. We then compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB vastly outperforms the IB in terms of the DIB cost function. Our derivation of the DIB also provides a family of models which interpolates between the DIB and IB by adding noise of a particular form. We discuss the role of this noise as a regularizer.

  17. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  18. Wave dynamics of regular and chaotic rays

    SciTech Connect

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space.

  19. Color correction optimization with hue regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
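
    The baseline pipeline described, a color correction matrix fit by linear regression between device and target color values, takes only a few lines; hue regularization would add penalty terms keeping memory colors (skin, grass, sky) near preferred hues. The data below are synthetic.

        # Least-squares 3x3 color correction matrix: target ~ device @ M.T
        import numpy as np

        rng = np.random.default_rng(5)
        device = rng.uniform(size=(24, 3))          # e.g. a 24-patch chart
        M_true = np.array([[ 1.6, -0.4, -0.2],
                           [-0.3,  1.5, -0.2],
                           [-0.1, -0.5,  1.6]])
        target = device @ M_true.T + 0.01 * rng.normal(size=(24, 3))

        M = np.linalg.lstsq(device, target, rcond=None)[0].T
        print(np.round(M, 2))                       # close to M_true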

  20. Reverberation mapping by regularized linear inversion

    NASA Technical Reports Server (NTRS)

    Krolik, Julian H.; Done, Christine

    1995-01-01

    Reverberation mapping of active galactic nucleus (AGN) emission-line regions requires the numerical deconvolution of two time series. We suggest the application of a new method, regularized linear inversion, to the solution of this problem. This method possesses many good features; it imposes no restrictions on the sign of the response function; it can provide clearly defined uncertainty estimates; it involves no guesswork about unmeasured data; it can give a clear indication of when the underlying convolution model is inadequate; and it is computationally very efficient. Using simulated data, we find the minimum S/N and length of the time series in order for this method to work satisfactorily. We also define guidelines for choosing the principal tunable parameter of the method and for interpreting the results. Finally, we reanalyze published data from the 1989 NGC 5548 campaign using this new method and compare the results to those previously obtained by maximum entropy analysis. For some lines we find good agreement, but for others, especially C III lambda(1909) and Si IV lambda(1400), we find significant differences. These can be attributed to the inability of the maximum entropy method to find negative values of the response function, but also illustrate the nonuniqueness of any deconvolution technique. We also find evidence that certain line light curves (e.g., C IV lambda(1549)) cannot be fully described by the simple linear convolution model.
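
    The method amounts to a damped linear solve: stack the driving light curve into a convolution matrix and minimize a least-squares misfit plus a roughness penalty, with no positivity constraint on the response. The sketch below applies this to synthetic time series; the choice of penalty and its weight are illustrative stand-ins for the paper's tunable parameter.

        # Recover psi from line = C @ psi + noise, ridge on first differences.
        import numpy as np

        rng = np.random.default_rng(6)
        n, m = 200, 30
        c = rng.normal(size=n).cumsum()                      # driving continuum
        psi_true = np.exp(-0.5 * ((np.arange(m) - 10.0) / 3.0) ** 2)
        C = np.array([[c[i - j] if i >= j else 0.0 for j in range(m)]
                      for i in range(n)])                    # convolution matrix
        line = C @ psi_true + 0.5 * rng.normal(size=n)

        lam = 10.0
        D = np.diff(np.eye(m), axis=0)                       # roughness penalty
        psi = np.linalg.solve(C.T @ C + lam * D.T @ D, C.T @ line)
        print("recovered lag of peak response:", psi.argmax())   # near 10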

  1. Identifying Cognitive States Using Regularity Partitions

    PubMed Central

    2015-01-01

    Functional Magnetic Resonance Imaging (fMRI) data can be used to depict functional connectivity of the brain. Standard techniques have been developed to construct brain networks from this data; typically nodes are considered as voxels or sets of voxels with weighted edges between them representing measures of correlation. Identifying cognitive states based on fMRI data is connected with recording voxel activity over a certain time interval. Using this information, network and machine learning techniques can be applied to discriminate the cognitive states of the subjects by exploring different features of data. In this work we wish to describe and understand the organization of brain connectivity networks under cognitive tasks. In particular, we use a regularity partitioning algorithm that finds clusters of vertices such that they all behave with each other almost like random bipartite graphs. Based on the random approximation of the graph, we calculate a lower bound on the number of triangles as well as the expectation of the distribution of the edges in each subject and state. We investigate the results by comparing them to the state of the art algorithms for exploring connectivity and we argue that during epochs that the subject is exposed to stimulus, the inspected part of the brain is organized in an efficient way that enables enhanced functionality. PMID:26317983

  2. Regularized estimation of Euler pole parameters

    NASA Astrophysics Data System (ADS)

    Aktuğ, Bahadir; Yildirim, Ömer

    2013-07-01

    Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
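
    The linear structure of the problem is easy to exhibit: a site velocity is v_i = ω × r_i, so stacking the cross-product matrices of the site positions gives a design matrix for ω, and a small ridge term plays the role of regularization when the network's limited geographic coverage makes the normal matrix ill-conditioned. Everything below (geometry, noise level, ridge weight) is synthetic and illustrative, not the paper's estimator.

        # v = omega x r  =>  v = (-[r]_x) omega, stacked over sites.
        import numpy as np

        def cross_matrix(r):
            return np.array([[0.0, -r[2], r[1]],
                             [r[2], 0.0, -r[0]],
                             [-r[1], r[0], 0.0]])

        rng = np.random.default_rng(7)
        R = 6371.0                                           # km
        lat = np.deg2rad(35.0 + 5.0 * rng.uniform(size=8))   # regional cluster
        lon = np.deg2rad(30.0 + 5.0 * rng.uniform(size=8))
        sites = R * np.stack([np.cos(lat) * np.cos(lon),
                              np.cos(lat) * np.sin(lon),
                              np.sin(lat)], axis=1)

        omega = np.array([1.0, -2.0, 3.0]) * 1e-9            # rad/yr
        A = np.vstack([-cross_matrix(r) for r in sites])
        v = A @ omega + 1e-6 * rng.normal(size=A.shape[0])   # ~1 mm/yr noise

        AtA = A.T @ A
        print("cond(A'A):", np.linalg.cond(AtA))   # large: near-collinear geometry
        lam = 1e-3 * np.trace(AtA)                 # illustrative ridge weight
        omega_ridge = np.linalg.solve(AtA + lam * np.eye(3), A.T @ v)
        print("ridge estimate:", omega_ridge)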

  3. Determinants of Scanpath Regularity in Reading.

    PubMed

    von der Malsburg, Titus; Kliegl, Reinhold; Vasishth, Shravan

    2015-09-01

    Scanpaths have played an important role in classic research on reading behavior. Nevertheless, they have largely been neglected in later research perhaps due to a lack of suitable analytical tools. Recently, von der Malsburg and Vasishth (2011) proposed a new measure for quantifying differences between scanpaths and demonstrated that this measure can recover effects that were missed with the traditional eyetracking measures. However, the sentences used in that study were difficult to process and scanpath effects accordingly strong. The purpose of the present study was to test the validity, sensitivity, and scope of applicability of the scanpath measure, using simple sentences that are typically read from left to right. We derived predictions for the regularity of scanpaths from the literature on oculomotor control, sentence processing, and cognitive aging and tested these predictions using the scanpath measure and a large database of eye movements. All predictions were confirmed: Sentences with short words and syntactically more difficult sentences elicited more irregular scanpaths. Also, older readers produced more irregular scanpaths than younger readers. In addition, we found an effect that was not reported earlier: Syntax had a smaller influence on the eye movements of older readers than on those of young readers. We discuss this interaction of syntactic parsing cost with age in terms of shifts in processing strategies and a decline of executive control as readers age. Overall, our results demonstrate the validity and sensitivity of the scanpath measure and thus establish it as a productive and versatile tool for reading research.

  4. Higher-Order Global Regularity of an Inviscid Voigt-Regularization of the Three-Dimensional Inviscid Resistive Magnetohydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Titi, Edriss S.

    2014-03-01

    We prove existence, uniqueness, and higher-order global regularity of strong solutions to a particular Voigt-regularization of the three-dimensional inviscid resistive magnetohydrodynamic (MHD) equations. Specifically, the coupling of a resistive magnetic field to the Euler-Voigt model is introduced to form an inviscid regularization of the inviscid resistive MHD system. The results hold in both the whole space R^3 and in the context of periodic boundary conditions. Weak solutions for this regularized model are also considered, and proven to exist globally in time, but the question of uniqueness for weak solutions is still open. Furthermore, we show that the solutions of the Voigt regularized system converge, as the regularization parameter α → 0, to strong solutions of the original inviscid resistive MHD, on the corresponding time interval of existence of the latter. Moreover, we also establish a new criterion for blow-up of solutions to the original MHD system inspired by this Voigt regularization.
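
    For readers unfamiliar with the Voigt device, the regularization consists of adding a term with a length-scale parameter α to the time derivative of the velocity. A plausible schematic form of the system described (the exact equations are in the paper) is, in LaTeX:

        \partial_t\left(u - \alpha^{2}\Delta u\right) + (u\cdot\nabla)u + \nabla p = (B\cdot\nabla)B, \qquad \nabla\cdot u = 0,
        \partial_t B + (u\cdot\nabla)B - (B\cdot\nabla)u = \mu\,\Delta B, \qquad \nabla\cdot B = 0,

    so that setting α = 0 formally recovers the inviscid, resistive MHD system, while α > 0 provides the regularization whose limit α → 0 is studied above.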

  5. Higher-Order Global Regularity of an Inviscid Voigt-Regularization of the Three-Dimensional Inviscid Resistive Magnetohydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Titi, Edriss S.

    2013-05-01

    We prove existence, uniqueness, and higher-order global regularity of strong solutions to a particular Voigt-regularization of the three-dimensional inviscid resistive magnetohydrodynamic (MHD) equations. Specifically, the coupling of a resistive magnetic field to the Euler-Voigt model is introduced to form an inviscid regularization of the inviscid resistive MHD system. The results hold in both the whole space R^3 and in the context of periodic boundary conditions. Weak solutions for this regularized model are also considered, and proven to exist globally in time, but the question of uniqueness for weak solutions is still open. Furthermore, we show that the solutions of the Voigt regularized system converge, as the regularization parameter α → 0, to strong solutions of the original inviscid resistive MHD, on the corresponding time interval of existence of the latter. Moreover, we also establish a new criterion for blow-up of solutions to the original MHD system inspired by this Voigt regularization.

  6. Temporal Regularity of the Environment Drives Time Perception

    PubMed Central

    2016-01-01

    It is reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stimulus was regular or not. We found that participants exposed to an irregular environment frequently reported perfectly regularly paced stimuli to be irregular. In a second experiment, we asked participants to judge whether the final stimulus was presented before or after a flash. In this way, we were able to determine distortions in temporal perception as changes in the timing necessary for the sound and the flash to be perceived as synchronous. We found that within a regular context, the perceived timing of deviant last stimuli changed so that the relative anisochrony appeared to be perceptually decreased. In the irregular context, the perceived timing of irregular stimuli following a regular sequence was not affected. These observations suggest that humans use temporal expectations to evaluate the regularity of sequences and that expectations are combined with sensory stimuli to adapt perceived timing to follow the statistics of the environment. Expectations can be seen as a priori probabilities on which the perceived timing of stimuli depends. PMID:27441686

  7. Elementary Particle Spectroscopy in Regular Solid Rewrite

    NASA Astrophysics Data System (ADS)

    Trell, Erik

    2008-10-01

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)×O(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each

  8. Elementary Particle Spectroscopy in Regular Solid Rewrite

    SciTech Connect

    Trell, Erik

    2008-10-17

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)×O(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each

  9. TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY

    SciTech Connect

    Crotts, Arlin P. S.

    2009-05-20

    Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ≈50% of reports originate from near Aristarchus, ≈16% from Plato, ≈6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ≈80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.

  10. Phase-regularized polygon computer-generated holograms.

    PubMed

    Im, Dajeong; Moon, Eunkyoung; Park, Yohan; Lee, Deokhwan; Hahn, Joonku; Kim, Hwi

    2014-06-15

    The dark-line defect problem in the conventional polygon computer-generated hologram (CGH) is addressed. To resolve this problem, we clarify the physical origin of the defect and address the concept of phase-regularization. A novel synthesis algorithm for a phase-regularized polygon CGH for generating photorealistic defect-free holographic images is proposed. The optical reconstruction results of the phase-regularized polygon CGHs without the dark-line defects are presented.

  11. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    ERIC Educational Resources Information Center

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between pronunciation of character and sound of phonetic component, it has been suggested as being part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how regularity effect is amplified or reduced by neighborhood characteristics and reveals the…

  12. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  13. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  14. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  15. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  16. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  17. 12 CFR 311.5 - Regular procedure for closing meetings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Regular procedure for closing meetings. 311.5... RULES GOVERNING PUBLIC OBSERVATION OF MEETINGS OF THE CORPORATION'S BOARD OF DIRECTORS § 311.5 Regular... a meeting will be taken only when a majority of the entire Board votes to take such action....

  18. Reading Comprehension and Regularized Orthography. Parts 1 and 2.

    ERIC Educational Resources Information Center

    Carvell, Robert L.

    The purpose of this study was to compare mature readers' comprehension of text presented in traditional orthography with their comprehension of text presented in a regularized orthography, specifically, to determine whether, when traditional orthography is regularized, any loss of meaning is attributable to the loss of the visual dissimilarity of…

  19. Inclusion Professional Development Model and Regular Middle School Educators

    ERIC Educational Resources Information Center

    Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo

    2014-01-01

    The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…

  20. 29 CFR 778.408 - The specified regular rate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... applicable).” The word “regular” describing the rate in this provision is not to be treated as surplusage. To... agreement in the courts. In both of the two cases before it, the Supreme Court found that the relationship... rate. There is no requirement, however, that the regular rate specified be equal to the regular rate...

  1. 32 CFR 724.211 - Regularity of government affairs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Regularity of government affairs. 724.211 Section 724.211 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY PERSONNEL NAVAL DISCHARGE REVIEW BOARD Authority/Policy for Departmental Discharge Review § 724.211 Regularity of...

  2. Gait variability and regularity of people with transtibial amputations.

    PubMed

    Parker, Kim; Hanada, Ed; Adderson, James

    2013-02-01

    Gait temporal-spatial variability and step regularity as measured by trunk accelerometry, measures relevant to fall risk and mobility, have not been well studied in individuals with lower-limb amputations. The study objective was to explore the differences in gait variability and regularity between individuals with unilateral transtibial amputations due to vascular (VAS) or nonvascular (NVAS) causes, and to examine fall history over the past year. Of the 34 individuals with transtibial amputations who participated, 72% of the 18 individuals with VAS and 50% of the 16 individuals with NVAS had experienced at least one fall in the past year. The incidence of falls was not significantly different between groups. Variability measures included the coefficient of variation (CV) in swing time and step length obtained from an electronic walkway. Regularity measures included anteroposterior, medial-lateral and vertical step regularity obtained from trunk accelerations. When controlling for velocity, balance confidence and time since amputation, there were no significant differences in gait variability or regularity measures between individuals with VAS and NVAS. In comparing fallers to nonfallers, no significant differences were found in gait variability or regularity measures when controlling for velocity and balance confidence. Vertical step regularity (p=0.026) was found to be the only parameter significantly related to fall history, although its discriminatory ability for fall history was only poor to fair. There is some indication that individuals who have experienced a fall may walk with decreased regularity, and this should be explored in future studies.
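
    A common way to compute step and stride regularity from trunk accelerations, in the spirit of the unbiased-autocorrelation procedure of Moe-Nilssen and Helbostad (which the study's measures resemble but may not match exactly), is sketched below on a synthetic signal: regularity is the autocorrelation value at the mean step and stride lags.

        # Step/stride regularity = autocorrelation at the step/stride lag.
        import numpy as np

        fs, f_step = 100.0, 1.8                      # Hz, steps per second
        t = np.arange(0.0, 20.0, 1.0 / fs)
        acc = (np.sin(2 * np.pi * f_step * t)        # step rhythm
               + 0.3 * np.sin(np.pi * f_step * t)    # left/right asymmetry
               + 0.1 * np.random.default_rng(8).normal(size=t.size))

        def unbiased_autocorr(x, max_lag):
            x = x - x.mean()
            return np.array([x[:len(x) - k] @ x[k:] / (len(x) - k)
                             for k in range(max_lag)]) / np.var(x)

        step_lag = int(round(fs / f_step))           # samples per step
        ac = unbiased_autocorr(acc, 2 * step_lag + 5)
        print("step regularity:  ", ac[step_lag])
        print("stride regularity:", ac[2 * step_lag])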

  3. 77 FR 76078 - Regular Board of Directors Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Sunshine Act Meeting TIME & DATE: 2:00 p.m., Wednesday, January 9, 2013.... Call to Order II. Executive Session III. Approval of the Regular Board of Directors Meeting Minutes...

  4. 77 FR 15142 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 2:30 p.m., Monday, March 26, 2012.... Executive Session III. Approval of the Regular Board of Directors Meeting Minutes IV. Approval of the...

  5. 76 FR 74831 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 1:30 p.m., Monday, December 5... . AGENDA: I. Call to Order II. Executive Session III. Approval of the Regular Board of Directors...

  6. 29 CFR 553.233 - “Regular rate” defined.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... OF THE FAIR LABOR STANDARDS ACT TO EMPLOYEES OF STATE AND LOCAL GOVERNMENTS Fire Protection and Law Enforcement Employees of Public Agencies Overtime Compensation Rules § 553.233 “Regular rate” defined. The rules for computing an employee's “regular rate”, for purposes of the Act's overtime pay...

  7. Regular expression order-sorted unification and matching

    PubMed Central

    Kutsia, Temur; Marin, Mircea

    2015-01-01

    We extend order-sorted unification by permitting regular expression sorts for variables and in the domains of function symbols. The obtained signature corresponds to a finite bottom-up unranked tree automaton. We prove that regular expression order-sorted (REOS) unification is of type infinitary and decidable. The unification problem presented by us generalizes some known problems, such as, e.g., order-sorted unification for ranked terms, sequence unification, and word unification with regular constraints. Decidability of REOS unification implies that sequence unification with regular hedge language constraints is decidable, generalizing the decidability result of word unification with regular constraints to terms. A sort weakening algorithm helps to construct a minimal complete set of REOS unifiers from the solutions of sequence unification problems. Moreover, we design a complete algorithm for REOS matching, and show that this problem is NP-complete and the corresponding counting problem is #P-complete. PMID:26523088

  8. Two hybrid regularization frameworks for solving the electrocardiography inverse problem

    NASA Astrophysics Data System (ADS)

    Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Liu, Feng; Crozier, Stuart

    2008-09-01

In this paper, two hybrid regularization frameworks, LSQR-Tik and Tik-LSQR, which integrate the properties of the direct regularization method (Tikhonov) and the iterative regularization method (LSQR), are proposed and investigated for solving ECG inverse problems. The LSQR-Tik method is based on the Lanczos process, which yields a sequence of small bidiagonal systems to approximate the original ill-posed problem; the Tikhonov regularization method is then applied to stabilize the projected problem. The Tik-LSQR method is formulated as an iterative LSQR inverse, augmented with a Tikhonov-like prior-information term. The performances of these two hybrid methods are evaluated using a realistic heart-torso model simulation protocol, in which the heart surface source method is employed to calculate the simulated epicardial potentials (EPs) from the action potentials (APs), and the acquired EPs are then used to calculate simulated body surface potentials (BSPs). The results show that the regularized solutions obtained by the LSQR-Tik method are comparable to those of the Tikhonov method, while the computational cost of the LSQR-Tik method is much lower. Moreover, the Tik-LSQR scheme can reconstruct the epicardial potential distribution more accurately, particularly for BSPs with high noise levels. This investigation suggests that hybrid regularization methods may be more effective than separate regularization approaches for ECG inverse problems.
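
For readers wanting a concrete starting point, SciPy's LSQR exposes a `damp` argument that yields a Tikhonov-damped Lanczos bidiagonalization, close in spirit (though not identical) to the hybrid idea above; the sketch below is illustrative only, on a synthetic ill-conditioned system:

```python
# Minimal sketch of a Tikhonov-damped LSQR solve: scipy's lsqr minimizes
# ||Ax - b||^2 + damp^2 ||x||^2, combining the Lanczos iteration with a
# Tikhonov-type penalty (not the authors' LSQR-Tik implementation).
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 80))
A[:, 40:] *= 1e-3                      # make the problem ill-conditioned
x_true = rng.standard_normal(80)
b = A @ x_true + 0.01 * rng.standard_normal(200)

for damp in (0.0, 0.1, 1.0):
    x = lsqr(A, b, damp=damp)[0]
    print(f"damp={damp:4.1f}  residual={np.linalg.norm(A @ x - b):.3f}  "
          f"||x||={np.linalg.norm(x):.3f}")
```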

  9. Learning rates of lq coefficient regularization learning with gaussian kernel.

    PubMed

    Lin, Shaobo; Zeng, Jinshan; Fang, Jian; Xu, Zongben

    2014-10-01

Regularization is a well-recognized, powerful strategy to improve the performance of a learning machine, and l(q) regularization schemes with 0 < q < ∞ are in widespread use. It is known that different q leads to different properties of the deduced estimators: l(2) regularization leads to a smooth estimator, while l(1) regularization leads to a sparse estimator. How the generalization capability of l(q) regularization learning varies with q is therefore worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing l(q) coefficient regularization schemes in the sample-dependent hypothesis space associated with a Gaussian kernel can attain the same almost optimal learning rates for all 0 < q < ∞. That is, the upper and lower bounds of learning rates for l(q) regularization learning are asymptotically identical for all 0 < q < ∞. Our finding tentatively reveals that, in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other, nongeneralization criteria such as smoothness, computational complexity or sparsity.
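
A minimal sketch of coefficient regularization in a Gaussian-kernel hypothesis space, contrasting q = 2 (smooth, dense coefficients) with q = 1 (sparse coefficients); the data, kernel width, and penalty weight below are all assumed for illustration:

```python
# Sketch: f(x) = sum_i c_i K(x, x_i) with an l2 or l1 penalty on c
# (illustrative only, not the letter's theoretical analysis).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(60)

K = rbf_kernel(X, X, gamma=5.0)     # Gaussian kernel matrix, gamma assumed
lam = 1e-2

# q = 2: closed-form ridge-type solution for the coefficients
c2 = np.linalg.solve(K.T @ K + lam * np.eye(60), K.T @ y)

# q = 1: sparse coefficients via the lasso with K as the design matrix
c1 = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(K, y).coef_

print("nonzero coefficients, q=2:", np.sum(np.abs(c2) > 1e-6))
print("nonzero coefficients, q=1:", np.sum(np.abs(c1) > 1e-6))
```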

  10. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  11. A local-order regularization for geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Gheymasi, H. Mohammadi; Gholami, A.

    2013-11-01

Different types of regularization have been developed to obtain stable solutions to linear inverse problems. Among these, total variation (TV) is known as an edge-preserving method, which leads to piecewise constant solutions and has received much attention for solving inverse problems arising in geophysical studies. However, the method shows staircase effects and is not suitable for models that include smooth regions. To overcome the staircase effect, we present a method which employs a local-order difference operator in the regularization term. The method proceeds in two steps: First, a pre-processing step locates the edges in the regularized solution using a properly defined minmod limiter, where the edges are determined by comparing solutions obtained with TV-type regularizations of different orders. Then, a local-order difference operator is constructed from the edge-location information obtained in the pre-processing step, and is subsequently used as the regularization operator in the final sparsity-promoting regularization. Experimental results from synthetic and real seismic traveltime tomography show that the proposed inversion method is able to retain the smooth regions of the regularized solution while preserving the sharp transitions present in it.
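
A toy sketch of a minmod limiter used to flag edge locations in a piecewise model; this is a simplification of the pre-processing step described above, and the thresholds below are assumed:

```python
# Minmod-based edge flag on a piecewise-constant-plus-ramp model.
import numpy as np

def minmod(a, b):
    """Smaller-magnitude argument where signs agree, zero otherwise."""
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

m = np.concatenate([np.zeros(20), np.ones(20), np.linspace(1, 2, 20)])
s = np.diff(m)                 # local slopes
lim = minmod(s[:-2], s[2:])    # limiter of the neighbors of each interior slope
interior = s[1:-1]
# A jump shows a large slope whose neighbor slopes collapse the limiter;
# the smooth ramp keeps slope and limiter comparable in size.
is_edge = (np.abs(interior) > 0.5) & (np.abs(lim) < 0.1 * np.abs(interior))
print("edge flagged at slope index:", np.flatnonzero(is_edge) + 1)
```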

  12. Three regularities of recognition memory: the role of bias.

    PubMed

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, and (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) that alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
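
A worked micro-example of how likelihood-ratio decisions generate the Mirror Effect under equal-variance Gaussian signal detection; the d' values are assumed, and for a likelihood-ratio criterion of beta = 1 the decision cut falls at d'/2:

```python
# Mirror Effect from likelihood-ratio placement of the criterion.
from scipy.stats import norm

for label, dprime in [("weak (d'=1)", 1.0), ("strong (d'=2)", 2.0)]:
    crit = dprime / 2                   # LR criterion with beta = 1
    hit = 1 - norm.cdf(crit - dprime)   # P(say "old" | old item)
    fa = 1 - norm.cdf(crit)             # P(say "old" | new item)
    print(f"{label}: hit = {hit:.3f}, false alarm = {fa:.3f}")
# The output ordering FA(strong) < FA(weak) < H(weak) < H(strong) is the
# mirror pattern: a better-recognized class raises hits AND lowers false alarms.
```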

  13. Fractional norm regularization: learning with very few relevant features.

    PubMed

    Kaban, Ata

    2013-06-01

    Learning in the presence of a large number of irrelevant features is an important problem in high-dimensional tasks. Previous studies have shown that L1-norm regularization can be effective in such cases while L2-norm regularization is not. Furthermore, work in compressed sensing suggests that regularization by nonconvex (e.g., fractional) semi-norms may outperform L1-regularization. However, for classification it is largely unclear when this may or may not be the case. In addition, the nonconvex problem is harder to solve than the convex L1 problem. In this paper, we provide a more in-depth analysis to elucidate the potential advantages and pitfalls of nonconvex regularization in the context of logistic regression where the regularization term employs the family of Lq semi-norms. First, using results from the phenomenon of concentration of norms and distances in high dimensions, we gain intuition about the working of sparse estimation when the dimensionality is very high. Second, using the probably approximately correct (PAC)-Bayes methodology, we give a data-dependent bound on the generalization error of Lq-regularized logistic regression, which is applicable to any algorithm that implements this model, and may be used to predict its generalization behavior from the training set alone. Third, we demonstrate the usefulness of our approach by experiments and applications, where the PAC-Bayes bound is used to guide the choice of semi-norm in the regularization term. The results support the conclusion that the optimal choice of regularization depends on the relative fraction of relevant versus irrelevant features, and a fractional norm with a small exponent is most suitable when the fraction of relevant features is very small.
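
The sketch below fits logistic regression with an Lq penalty for several q on data with very few relevant features, using a simple epsilon-smoothed gradient solver; it is an illustrative stand-in, not the paper's PAC-Bayes machinery, and all data and hyperparameters are assumed:

```python
# Lq-regularized logistic regression via gradient descent on the smoothed
# penalty (w^2 + eps)^(q/2), whose gradient is q*w*(w^2 + eps)^(q/2 - 1).
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 100
w_true = np.zeros(d); w_true[:3] = 2.0        # only 3 relevant features
X = rng.standard_normal((n, d))
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)

def fit(q, lam=0.05, lr=0.05, eps=1e-8, iters=3000):
    w = np.zeros(d)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        grad = X.T @ (p - y) / n + lam * q * w * (w**2 + eps)**(q / 2 - 1)
        w -= lr * grad
    return w

for q in (2.0, 1.0, 0.5):
    print(f"q={q}: {np.sum(np.abs(fit(q)) > 0.05)} coefficients above 0.05")
```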

  14. Exploring the spectrum of regularized bosonic string theory

    SciTech Connect

    Ambjørn, J. Makeenko, Y.

    2015-03-15

    We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.

  15. Regularity criterion for the 3D Hall-magneto-hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dai, Mimi

    2016-07-01

    This paper studies the regularity problem for the 3D incompressible resistive viscous Hall-magneto-hydrodynamic (Hall-MHD) system. The Kolmogorov 41 phenomenological theory of turbulence [14] predicts that there exists a critical wavenumber above which the high frequency part is dominated by the dissipation term in the fluid equation. Inspired by this idea, we apply an approach of splitting the wavenumber combined with an estimate of the energy flux to obtain a new regularity criterion. The regularity condition presented here is weaker than conditions in the existing criteria (Prodi-Serrin type criteria) for the 3D Hall-MHD system.

  16. Factors distinguishing regular readers of breast cancer information in magazines.

    PubMed

    Johnson, J D

    1997-01-01

    This study examined the differences between women who were regular and occasional readers of breast cancer information in magazines. Based on uses and gratifications theory and the Health Belief Model, women respondents (n = 366) were predicted to differentially expose themselves to information. A discriminant analysis showed that women who were regular readers reported greater fear, perceived vulnerability, general health concern, personal experience, and surveillance need for breast cancer-related information. The results are discussed in terms of the potential positive and negative consequences of regular exposure to breast cancer information in magazines. PMID:9311097

  17. Some results on the spectra of strongly regular graphs

    NASA Astrophysics Data System (ADS)

    Vieira, Luís António de Almeida; Mano, Vasco Moço

    2016-06-01

Let G be a strongly regular graph whose adjacency matrix is A. We associate with the strongly regular graph G a real finite-dimensional Euclidean Jordan algebra 𝒱 of rank three, spanned by I and the natural powers of A, endowed with the Jordan product of matrices and with the inner product given by the usual trace of matrices. Finally, by analysis of the binomial Hadamard series of an element of 𝒱, we establish some inequalities on the parameters and on the spectrum of a strongly regular graph similar to those established in Theorems 3 and 4.
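
As a quick numerical check of the three-eigenvalue structure underlying such a rank-three algebra, the snippet below (assuming networkx is available) computes the spectrum of the Petersen graph, a strongly regular graph with parameters (10, 3, 0, 1):

```python
# A strongly regular graph has exactly three distinct adjacency eigenvalues,
# which is what makes span{I, A, A^2} a rank-three algebra.
import numpy as np
import networkx as nx

A = nx.to_numpy_array(nx.petersen_graph())
eig = np.round(np.linalg.eigvalsh(A), 6)
print(sorted(set(eig)))      # [-2.0, 1.0, 3.0]
```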

  18. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    PubMed

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

In this paper, we present a novel low-rank matrix factorization algorithm with an adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Unlike MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek the graph weight matrix and low-dimensional representations of the data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on several data sets demonstrate that the proposed algorithm outperforms state-of-the-art low-rank matrix factorization methods.

  19. Quaternion regularization and stabilization of perturbed central motion. II

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    1993-04-01

    Generalized regular quaternion equations for the three-dimensional two-body problem in terms of Kustaanheimo-Stiefel variables are obtained within the framework of the quaternion theory of regularizing and stabilizing transformations of the Newtonian equations for perturbed central motion. Regular quaternion equations for perturbed central motion of a material point in a central field with a certain potential Pi are also derived in oscillatory and normal forms. In addition, systems of perturbed central motion equations are obtained which include quaternion equations of perturbed orbit orientations in oscillatory or normal form, and a generalized Binet equation is derived. A comparative analysis of the equations is carried out.

  20. Generic quantum walks with memory on regular graphs

    NASA Astrophysics Data System (ADS)

    Li, Dan; Mc Gettrick, Michael; Gao, Fei; Xu, Jie; Wen, Qiao-Yan

    2016-04-01

Quantum walks with memory (QWM) are a type of modified quantum walks that record the walker's latest path. To date, only two kinds of QWM have been presented; designing more QWM is desirable so that their potential can be explored. In this work, by presenting the one-to-one correspondence between QWM on a regular graph and quantum walks without memory (QWoM) on the line digraph of the regular graph, we construct a generic model of QWM on regular graphs. This construction gives a general scheme for building all possible standard QWM on regular graphs and makes it possible to study properties of different kinds of QWM. Here, taking the simplest example, QWM with one memory on the line, we analyze some properties of QWM, such as variance, occupancy rate, and localization.

  1. Loop Invariants, Exploration of Regularities, and Mathematical Games.

    ERIC Educational Resources Information Center

    Ginat, David

    2001-01-01

    Presents an approach for illustrating, on an intuitive level, the significance of loop invariants for algorithm design and analysis. The illustration is based on mathematical games that require the exploration of regularities via problem-solving heuristics. (Author/MM)
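
A small example in this spirit, with the loop invariant stated (and checked) at the top of each iteration of a running-maximum loop; the example is illustrative and not taken from the article:

```python
# The loop invariant documents the regularity the loop maintains.
def running_max(values):
    best = values[0]
    for i in range(1, len(values)):
        # Invariant: best == max(values[:i]) at the top of every iteration.
        assert best == max(values[:i])
        if values[i] > best:
            best = values[i]
    return best

print(running_max([3, 1, 4, 1, 5, 9, 2, 6]))   # 9
```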

  2. Regularization of the restricted problem of four bodies.

    NASA Technical Reports Server (NTRS)

    Giacaglia, G. E. O.

    1967-01-01

    Regularization of restricted three-body problem extended to case where three primaries of any mass revolve in circular orbits around common center of mass and fourth body of infinitesimal mass moves in their field

  3. What's Regular Exercise Worth? Maybe $2,500 Per Year

    MedlinePlus

    ... medlineplus.gov/news/fullstory_160859.html What's Regular Exercise Worth? Maybe $2,500 Per Year That's how ... afford the time and money to start an exercise routine? Maybe this will help: A new study ...

  4. Are Pupils in Special Education Too "Special" for Regular Education?

    NASA Astrophysics Data System (ADS)

    Pijl, Ysbrand J.; Pijl, Sip J.

    1998-01-01

    In the Netherlands special needs pupils are often referred to separate schools for the Educable Mentally Retarded (EMR) or the Learning Disabled (LD). There is an ongoing debate on how to reduce the growing numbers of special education placements. One of the main issues in this debate concerns the size of the difference in cognitive abilities between pupils in regular education and those eligible for LD or EMR education. In this study meta-analysis techniques were used to synthesize the findings from 31 studies on differences between pupils in regular primary education and those in special education in the Netherlands. Studies were grouped into three categories according to the type of measurements used: achievement, general intelligence and neuropsychological tests. It was found that pupils in regular education and those in special education differ in achievement and general intelligence. Pupils in schools for the educable mentally retarded in particular perform at a much lower level than is common in regular Dutch primary education.
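
For readers unfamiliar with the synthesis step, the sketch below shows fixed-effect inverse-variance pooling of standardized mean differences; the effect sizes are hypothetical, and the 31 studies' actual data are not reproduced here:

```python
# Fixed-effect meta-analytic pooling of standardized mean differences.
import numpy as np

d = np.array([1.10, 0.85, 1.30, 0.95])    # hypothetical effect sizes
v = np.array([0.04, 0.06, 0.05, 0.03])    # their sampling variances
w = 1 / v                                  # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"pooled d = {d_pooled:.2f} (95% CI +/- {1.96 * se:.2f})")
```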

  5. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of the Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity..., and at the same time, it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean-free-path epsilon approaches 0, and the convergence rate is estimated.
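
To make the "bounded approximation" point concrete, the snippet below compares the unbounded Navier-Stokes dissipation symbol with a bounded Rosenau-type symbol; the specific functional form nu*k^2/(1 + (eps*k)^2) is assumed here purely for illustration:

```python
# The two symbols agree at low wavenumbers; the regularized one saturates
# at nu/eps^2 instead of growing without bound.
import numpy as np

nu, eps = 0.1, 0.05
k = np.array([1.0, 10.0, 100.0, 1000.0])
print("Navier-Stokes symbol:", nu * k**2)
print("Rosenau-type symbol: ", nu * k**2 / (1 + (eps * k)**2))
```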

  6. On almost regularity and π-normality of topological spaces

    NASA Astrophysics Data System (ADS)

    Saad Thabit, Sadeq Ali; Kamarulhaili, Hailiza

    2012-05-01

π-Normality is a weaker version of normality. It was introduced by Kalantan in 2008. π-Normality lies between normality and almost normality (resp. quasi-normality). The importance of this topological property is that it behaves slightly differently from normality and almost normality (quasi-normality). π-Normality is neither a productive nor a hereditary property in general. In this paper, some properties of almost regular spaces are presented. In particular, a few results on almost regular spaces are improved. Some relationships between almost regularity and π-normality are presented. π-Generalized closed sets are used to obtain a characterization and preservation theorems of π-normal spaces. Also, we show by two counterexamples that an almost regular Lindelöf space (resp. an almost regular space with a σ-locally finite base) is not necessarily π-normal. Almost normality of the rational sequence topology is also proved.

  7. A novel regularized edge-preserving super-resolution algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Chen, Fu-sheng; Zhang, Zhi-jie; Wang, Chen-sheng

    2013-09-01

Using super-resolution (SR) technology is a good approach to obtaining high-resolution infrared images. However, image super-resolution reconstruction is essentially an ill-posed problem, so it is important to design an effective regularization term (image prior). A Gaussian prior is widely used in the regularization term, but the reconstructed SR image then becomes over-smooth. Here, a novel regularization term called the non-local means (NLM) term is derived, based on the assumption that natural image content is likely to repeat itself within some neighborhood. In the proposed framework, the estimated high-resolution image is obtained by minimizing a cost function. An iterative method is applied to solve the optimization problem, and as the iteration progresses the regularization term is adaptively updated. The proposed algorithm has been tested in several experiments. The experimental results show that the proposed approach is robust and can reconstruct higher-quality images in both quantitative terms and perceptual effect.
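
A minimal 1D non-local means sketch showing the self-similarity weighting the NLM term relies on; this is plain denoising on synthetic data, whereas the paper embeds NLM as a regularization term inside an SR cost function:

```python
# 1D NLM: each sample becomes a weighted average of all samples, weighted by
# patch similarity.
import numpy as np

rng = np.random.default_rng(3)
x = np.repeat([0.0, 1.0, 0.0], 40) + 0.2 * rng.standard_normal(120)

def nlm_1d(x, patch=3, h=0.3):
    pad = np.pad(x, patch, mode='reflect')
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(x.size)])
    out = np.empty_like(x)
    for i in range(x.size):
        w = np.exp(-np.sum((patches - patches[i])**2, axis=1) / h**2)
        out[i] = np.sum(w * x) / np.sum(w)
    return out

print("noise std before/after: %.3f / %.3f"
      % (x[:40].std(), nlm_1d(x)[:40].std()))
```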

  8. Identifying basketball performance indicators in regular season and playoff games.

    PubMed

    García, Javier; Ibáñez, Sergio J; De Santos, Raúl Martinez; Leite, Nuno; Sampaio, Jaime

    2013-03-01

The aim of the present study was to identify the basketball game performance indicators which best discriminate winners and losers in regular season and playoff games. The sample was composed of 323 games of the ACB Spanish Basketball League, from the regular season (n=306) and from the playoffs (n=17). A preliminary cluster analysis allowed splitting the sample into balanced (equal to or below 12 points), unbalanced (between 13 and 28 points) and very unbalanced games (above 28 points). A discriminant analysis was used to identify the performance indicators in both regular season and playoff games. In regular season games, the winning teams dominated in assists, defensive rebounds, and successful 2- and 3-point field goals. In playoff games, however, the winning teams' superiority lay only in defensive rebounding. In practical terms, these results may help coaches design training programs that reflect the importance of having different offensive set plays, as well as specific conditioning programs to prepare for defensive rebounding.

  9. Estimating signal loss in regularized GRACE gravity field solutions

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Wahr, J. M.

    2011-05-01

    Gravity field solutions produced using data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission are subject to errors that increase as a function of increasing spatial resolution. Two commonly used techniques to improve the signal-to-noise ratio in the gravity field solutions are post-processing, via spectral filters, and regularization, which occurs within the least-squares inversion process used to create the solutions. One advantage of post-processing methods is the ability to easily estimate the signal loss resulting from the application of the spectral filter by applying the filter to synthetic gravity field coefficients derived from models of mass variation. This is a critical step in the construction of an accurate error budget. Estimating the amount of signal loss due to regularization, however, requires the execution of the full gravity field determination process to create synthetic instrument data; this leads to a significant cost in computation and expertise relative to post-processing techniques, and inhibits the rapid development of optimal regularization weighting schemes. Thus, while a number of studies have quantified the effects of spectral filtering, signal modification in regularized GRACE gravity field solutions has not yet been estimated. In this study, we examine the effect of one regularization method. First, we demonstrate that regularization can in fact be performed as a post-processing step if the solution covariance matrix is available. Regularization then is applied as a post-processing step to unconstrained solutions from the Center for Space Research (CSR), using weights reported by the Centre National d'Etudes Spatiales/Groupe de Recherches de geodesie spatiale (CNES/GRGS). After regularization, the power spectra of the CSR solutions agree well with those of the CNES/GRGS solutions. Finally, regularization is performed on synthetic gravity field solutions derived from a land surface model, revealing that in
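
The key identity the study exploits can be shown in a few lines: if N = A^T A is the normal (inverse-covariance) matrix, the Tikhonov solution satisfies (N + lam*R) x_reg = A^T b = N x_unconstrained, so regularization can be applied after the unconstrained solve. A dense toy sketch, with the weights lam and operator R assumed:

```python
# Regularization as post-processing, given the normal matrix N = A^T A.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 30)) @ np.diag(np.logspace(0, -4, 30))
x_true = rng.standard_normal(30)
b = A @ x_true + 1e-3 * rng.standard_normal(100)

N = A.T @ A
x_uncon = np.linalg.solve(N, A.T @ b)        # noisy unconstrained solution
lam, R = 1e-4, np.eye(30)                    # assumed weight and operator
x_reg = np.linalg.solve(N + lam * R, N @ x_uncon)   # post-processing step

for name, x in [("unconstrained", x_uncon), ("regularized", x_reg)]:
    print(f"{name}: error = {np.linalg.norm(x - x_true):.3f}")
```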

  10. Note on regular black holes in a brane world

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2015-10-01

In this work, we show that regular black holes in a Randall-Sundrum-type brane world model are generated by the nonlocal bulk influence, expressed by a constant parameter in the brane metric, only in the spherical case. In the axial case (black holes with rotation), this influence forbids such regular solutions. A nonconstant bulk influence is necessary to generate regular black holes with rotation in this context.

  11. An adaptive Tikhonov regularization method for fluorescence molecular tomography.

    PubMed

    Cao, Xu; Zhang, Bin; Wang, Xin; Liu, Fei; Liu, Ke; Luo, Jianwen; Bai, Jing

    2013-08-01

The high degree of absorption and scattering of photons propagating through biological tissues makes fluorescence molecular tomography (FMT) reconstruction a severely ill-posed problem, and the reconstructed result is susceptible to noise in the measurements. To obtain a reasonable solution, Tikhonov regularization (TR) is generally employed to solve the inverse problem of FMT. However, with a fixed regularization parameter, the Tikhonov solutions suffer from low resolution. In this work, an adaptive Tikhonov regularization (ATR) method is presented. Considering that large regularization parameters smooth the solution at low spatial resolution, while small regularization parameters sharpen the solution at a high level of noise, the ATR method adaptively updates the spatially varying regularization parameters during the iteration process and uses them to penalize the solutions. The ATR method can adequately sharpen the feasible region with fluorescent probes and smooth the region without fluorescent probes, without resorting to complementary a priori information. Phantom experiments are performed to verify the feasibility of the proposed method. The results demonstrate that the proposed method can improve the spatial resolution and reduce the noise of FMT reconstruction at the same time.
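
A toy sketch of the general idea of spatially varying, iteratively updated Tikhonov parameters; the update rule below (penalize weakly where the current estimate is large, strongly where it is small) is an assumed illustration, not the paper's ATR rule:

```python
# Iteratively reweighted, spatially varying Tikhonov regularization.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((80, 50))
x_true = np.zeros(50); x_true[[5, 20, 35]] = [4.0, -3.0, 5.0]  # sparse sources
b = A @ x_true + 0.05 * rng.standard_normal(80)

x = np.zeros(50)
for _ in range(10):
    lam = 1.0 / (np.abs(x) + 0.1)        # per-voxel regularization parameters
    x = np.linalg.solve(A.T @ A + np.diag(lam), A.T @ b)
print("recovered support:", np.flatnonzero(np.abs(x) > 0.5))
```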

  12. The relationship between lifestyle regularity and subjective sleep quality

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant (rho = -0.4, p < 0.001) correlation between SRM (both metrics) and PSQI, indicating that subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.

  13. Nonlocal means-based regularizations for statistical CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Ma, Jianhua; Liu, Yan; Han, Hao; Li, Lihong; Wang, Jing; Liang, Zhengrong

    2014-03-01

Statistical iterative reconstruction (SIR) methods have shown remarkable gains over the conventional filtered backprojection (FBP) method in improving image quality for low-dose computed tomography (CT). They reconstruct the CT images by maximizing/minimizing a cost function in a statistical sense, where the cost function usually consists of two terms: a data-fidelity term modeling the statistics of the measured data, and a regularization term reflecting prior information. The regularization term in SIR plays a critical role for successful image reconstruction, and an established family of regularizations is based on the Markov random field (MRF) model. Inspired by the success of the nonlocal means (NLM) algorithm in image processing applications, we proposed, in this work, a family of generic and edge-preserving NLM-based regularizations for SIR. We evaluated one of them in which the potential function takes the quadratic form. Experimental results with both digital and physical phantoms clearly demonstrated that SIR with the proposed regularization can achieve more significant gains than SIR with the widely used Gaussian MRF regularization and the conventional FBP method, in terms of image noise reduction and resolution preservation.

  14. Nondissipative Velocity and Pressure Regularizations for the ICON Model

    NASA Astrophysics Data System (ADS)

    Restelli, M.; Giorgetta, M.; Hundertmark, T.; Korn, P.; Reich, S.

    2009-04-01

A challenging aspect in the numerical simulation of atmospheric and oceanic flows is the multiscale character of the problem both in space and time. The small spatial scales are generated by the turbulent energy and enstrophy cascades, and are usually dealt with by means of turbulence parametrizations, while the small temporal scales are governed by the propagation of acoustic and gravity waves, which are of little importance for the large-scale dynamics and are often eliminated by means of a semi-implicit time discretization. We propose to treat both phenomena, subgrid turbulence and temporal scale separation, in a unified way by means of nondissipative regularizations of the underlying model equations. More precisely, we discuss the use of two regularized equation sets: the velocity regularization, also known as the Lagrangian averaged Navier-Stokes system, and the pressure regularization. Both regularizations are nondissipative since they do not enhance the dissipation of energy and enstrophy of the flow. The velocity regularization models the effects of the subgrid velocity fluctuations on the mean flow; it has thus been proposed as a turbulence parametrization and has been found to yield promising results in ocean modeling [HHPW08]. In particular, the velocity regularization results in a higher variability of the numerical solution. The pressure regularization, discussed in [RWS07], modifies the propagation of acoustic and gravity waves so that the resulting system can be discretized explicitly in time with time steps analogous to those allowed by a semi-implicit method. Compared to semi-implicit time integrators, however, the pressure regularization takes fully into account the geostrophic balance of the flow. We discuss here the implementation of the velocity and pressure regularizations within the numerical framework of the ICON general circulation model (GCM) [BR05] for the case of the rotating shallow water system, showing how the original numerical
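
The velocity regularization rests on a Helmholtz-type smoothing, u = (1 - alpha^2 * Laplacian)^(-1) v, which is cheap to illustrate spectrally in 1D; the grid size and alpha below are assumed for illustration:

```python
# Spectral Helmholtz inversion: large scales pass, small scales are damped.
import numpy as np

n, L, alpha = 256, 2 * np.pi, 0.1
x = np.linspace(0, L, n, endpoint=False)
v = np.sin(x) + 0.5 * np.sin(20 * x)          # large- plus small-scale motion

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # integer wavenumbers here
u = np.fft.ifft(np.fft.fft(v) / (1 + alpha**2 * k**2)).real

V, U = np.fft.fft(v), np.fft.fft(u)
print("k=1 amplitude ratio:  %.3f" % np.abs(U[1] / V[1]))     # ~1 (kept)
print("k=20 amplitude ratio: %.3f" % np.abs(U[20] / V[20]))   # ~0.2 (damped)
```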

  15. Regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Cates, Matthew J

    2014-01-01

    Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular salmeterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and FDA submissions in relation to salmeterol. The date of the most recent search was August 2011. Selection criteria We included controlled parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular salmeterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 26 trials comparing salmeterol to placebo and eight trials comparing with salbutamol. These included 62,815 participants with asthma (including 2,599 children). In six trials (2,766 patients), no serious adverse event data could be obtained. All-cause mortality was higher with regular salmeterol than placebo but the increase was not significant (Peto odds ratio (OR) 1.33 (95% CI 0.85 to 2.08)). Non-fatal serious adverse events were significantly increased when regular salmeterol was compared with placebo (OR 1.15 95% CI 1.02 to 1.29). One extra serious adverse event occurred over 28 weeks for every 188 people
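
For reference, the Peto odds ratio used for such rare-event meta-analyses can be computed as follows; the 2x2 counts below are hypothetical, not the review's trial data:

```python
# Peto OR: exp(sum(O - E) / sum(V)) over trials, with E and V from the
# hypergeometric distribution of each 2x2 table.
import numpy as np

# (events_treat, n_treat, events_ctrl, n_ctrl) for hypothetical trials
tables = [(4, 500, 2, 500), (6, 800, 3, 790), (1, 300, 1, 310)]
O_minus_E, V = 0.0, 0.0
for a, n1, c, n2 in tables:
    N, m1 = n1 + n2, a + c
    E = n1 * m1 / N                                     # expected events
    O_minus_E += a - E
    V += n1 * n2 * m1 * (N - m1) / (N**2 * (N - 1))     # hypergeometric variance
peto_or = np.exp(O_minus_E / V)
ci = np.exp(O_minus_E / V + np.array([-1.96, 1.96]) / np.sqrt(V))
print(f"Peto OR = {peto_or:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```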

  16. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    SciTech Connect

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie

    2011-11-15

Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual norm and the regularized solution norm. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  17. A model and regularization scheme for ultrasonic beamforming clutter reduction.

    PubMed

    Byram, Brett; Dei, Kazuyuki; Tierney, Jaime; Dumont, Douglas

    2015-11-01

Acoustic clutter produced by off-axis and multipath scattering is known to cause image degradation, and in some cases these sources may be the prime determinants of in vivo image quality. We have previously shown some success addressing these sources of image degradation by modeling the aperture domain signal from different sources of clutter, and then decomposing aperture domain data using the modeled sources. Our previous model had some shortcomings, including model mismatch and failure to recover B-Mode speckle statistics. These shortcomings are addressed here by developing a better model and by using a general regularization approach appropriate for the model and data. We present results with L1 (lasso), L2 (ridge), and combined L1/L2 (elastic-net) regularization methods. We call our new method aperture domain model image reconstruction (ADMIRE). Our results demonstrate that ADMIRE with L1 regularization, or weighted toward L1 in the case of elastic-net regularization, improves image quality. L1 by itself works well, but additional improvements are seen with elastic-net regularization over the pure L1 constraint. On in vivo example cases, L1 regularization showed mean contrast improvements of 4.6 and 6.8 dB on fundamental and harmonic images, respectively. Elastic-net regularization (α = 0.9) showed mean contrast improvements of 17.8 dB on fundamental images and 11.8 dB on harmonic images. We also demonstrate that in uncluttered Field II simulations the decluttering algorithm produces the same contrast, contrast-to-noise ratio, and speckle SNR as normal B-mode imaging, demonstrating that ADMIRE preserves typical image features.
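
A conceptual sketch of an elastic-net decomposition of a measured signal against a dictionary of modeled sources, using scikit-learn; this is not the ADMIRE code, and the dictionary, weights, and threshold are assumed:

```python
# Elastic-net decomposition: scikit-learn's l1_ratio plays the role of the
# L1-vs-L2 weighting discussed above.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(6)
D = rng.standard_normal((64, 200))     # columns: modeled signal/clutter sources
coef_true = np.zeros(200); coef_true[[10, 50]] = [1.0, -0.7]
y = D @ coef_true + 0.05 * rng.standard_normal(64)

model = ElasticNet(alpha=0.01, l1_ratio=0.9, fit_intercept=False,
                   max_iter=50000)
model.fit(D, y)
print("selected components:", np.flatnonzero(np.abs(model.coef_) > 0.1))
```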

  18. Multiscale regularized reconstruction for enhancing microcalcification in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir; Zhou, Chuan

    2012-03-01

Digital breast tomosynthesis (DBT) holds strong promise for improving the sensitivity of detecting subtle mass lesions. Detection of microcalcifications is more difficult because of high noise and subtle signals in the large DBT volume. It is important to enhance the contrast-to-noise ratio (CNR) of microcalcifications in DBT reconstruction. A major challenge of implementing microcalcification enhancement or noise regularization in DBT reconstruction is to preserve the image quality of masses, especially those with ill-defined margins and subtle spiculations. We are developing a new multiscale regularization (MSR) method for the simultaneous algebraic reconstruction technique (SART) to improve the CNR of microcalcifications without compromising the quality of masses. Each DBT slice is stratified into different frequency bands via wavelet decomposition, and the regularization method applies different degrees of regularization to different frequency bands to preserve features of interest and suppress noise. Regularization is constrained by a characteristic map to avoid smoothing subtle microcalcifications. The characteristic map is generated via image feature analysis to identify potential microcalcification locations in the DBT volume. The MSR method was compared to the non-convex total p-variation (TpV) method and to SART with no regularization (NR) in terms of the CNR and the full width at half maximum of the line profiles intersecting calcifications and mass spiculations in DBT of human subjects. The results demonstrated that SART regularized by the MSR method was superior to the TpV method for subtle microcalcifications in terms of CNR enhancement. The MSR method preserved the quality of subtle spiculations better than the TpV method in comparison to NR.
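
A sketch of band-dependent soft-thresholding of a wavelet decomposition, the core of applying different degrees of regularization to different frequency bands; it assumes PyWavelets is available, and omits the paper's characteristic map and SART coupling:

```python
# Per-band soft-thresholding of a 2-level wavelet decomposition.
import numpy as np
import pywt

rng = np.random.default_rng(7)
img = np.zeros((64, 64)); img[28:36, 28:36] = 1.0
img += 0.2 * rng.standard_normal(img.shape)

coeffs = pywt.wavedec2(img, 'db2', level=2)   # [approx, coarse details, fine details]
thresholds = {1: 0.1, 2: 0.4}                 # lighter on coarse, heavier on fine
new_coeffs = [coeffs[0]]
for lvl, detail in enumerate(coeffs[1:], start=1):
    t = thresholds[lvl]
    new_coeffs.append(tuple(pywt.threshold(d, t, mode='soft') for d in detail))
denoised = pywt.waverec2(new_coeffs, 'db2')
print("background std before/after: %.3f / %.3f"
      % (img[:16, :16].std(), denoised[:16, :16].std()))
```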

  19. Particle motion and Penrose processes around rotating regular black hole

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

The motion of neutral particles around a rotating regular black hole derived from the Ayón-Beato-García (ABG) black hole solution by the Newman-Janis algorithm in the preceding paper (Toshmatov et al., Phys. Rev. D 89:104017, 2014) has been studied. The dependence of the ISCO (innermost stable circular orbits along geodesics) and of unstable orbits on the value of the electric charge of the rotating regular black hole has been shown. Energy extraction from the rotating regular black hole through various processes has been examined. We have found an expression for the center-of-mass energy of colliding neutral particles coming from infinity, based on the BSW (Bañados-Silk-West) mechanism. The electric charge Q of the rotating regular black hole decreases the potential of the gravitational field as compared to the Kerr black hole, and the particles demonstrate less bound energy at the circular geodesics. This causes an increase in the efficiency of energy extraction through the BSW process in the presence of the electric charge Q. Furthermore, we have studied particle emission due to the BSW effect, assuming that two neutral particles collide near the horizon of the rotating regular extremal black hole and produce another two particles. We have shown that the efficiency of the energy extraction is less than the value of 146.6% valid for the Kerr black hole. It has also been demonstrated that the efficiency of energy extraction from the rotating regular black hole via the Penrose process decreases with increasing electric charge Q and is smaller in comparison to the 20.7% obtained for the extreme Kerr black hole with specific angular momentum a = M.

  20. Another look at statistical learning theory and regularization.

    PubMed

    Cherkassky, Vladimir; Ma, Yunqian

    2009-09-01

The paper reviews and highlights distinctions between function-approximation (FA) and VC theory and methodology, mainly within the setting of regression problems and a squared-error loss function, and illustrates empirically the differences between the two when data is sparse and/or the input distribution is non-uniform. In FA theory, the goal is to estimate an unknown true dependency (or 'target' function) in regression problems, or the posterior probability P(y|x) in classification problems. In VC theory, the goal is to 'imitate' the unknown target function, in the sense of minimization of prediction risk or good 'generalization'. That is, the result of VC learning depends on the (unknown) input distribution, while that of FA does not. This distinction is important because regularization theory, originally introduced under a clearly stated FA setting [Tikhonov, N. (1963). On solving ill-posed problem and method of regularization. Doklady Akademii Nauk USSR, 153, 501-504; Tikhonov, N., & V. Y. Arsenin (1977). Solution of ill-posed problems. Washington, DC: W. H. Winston], has later been used under the risk-minimization or VC setting. More recently, several authors [Evgeniou, T., Pontil, M., & Poggio, T. (2000). Regularization networks and support vector machines. Advances in Computational Mathematics, 13, 1-50; Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning: Data mining, inference and prediction. Springer; Poggio, T. and Smale, S., (2003). The mathematics of learning: Dealing with data. Notices of the AMS, 50 (5), 537-544] applied constructive methodology based on the regularization framework to learning dependencies from data (under the VC-theoretical setting). However, such regularization-based learning is usually presented as a purely constructive methodology (with no clearly stated problem setting). This paper compares FA/regularization and VC/risk minimization methodologies in terms of underlying theoretical assumptions. The control of model

  1. Regular treatment with formoterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Cates, Matthew J

    2014-01-01

    Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular formoterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and Food and Drug Administration (FDA) submissions in relation to formoterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular formoterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second author checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 22 studies (8032 participants) comparing regular formoterol to placebo and salbutamol. Non-fatal serious adverse event data could be obtained for all participants from published studies comparing formoterol and placebo but only 80% of those comparing formoterol with salbutamol or terbutaline. Three deaths occurred on regular formoterol and none on placebo; this difference was not statistically significant. It was not possible to assess disease-specific mortality in view of the small number of deaths. Non-fatal serious adverse events were significantly increased when

  2. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4
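
On a problem small enough for a dense SVD, the L-curve can be traced directly, which clarifies what the Lanczos bidiagonalization approximates at GRACE scale; the toy system below is assumed for illustration:

```python
# Tracing the Tikhonov L-curve via SVD filter factors sigma/(sigma^2 + lam^2).
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((120, 60)) @ np.diag(np.logspace(0, -5, 60))
x_true = rng.standard_normal(60)
b = A @ x_true + 1e-4 * rng.standard_normal(120)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b
for lam in np.logspace(-6, 0, 7):
    f = s / (s**2 + lam**2)                    # Tikhonov filter factors
    x = Vt.T @ (f * beta)
    print(f"lam={lam:8.1e}  ||Ax-b||={np.linalg.norm(A @ x - b):9.3e}  "
          f"||x||={np.linalg.norm(x):8.3f}")
# Plotting log||x|| against log||Ax-b|| traces the L-curve; its corner marks
# the preferred regularization parameter.
```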

  3. On Nonperiodic Euler Flows with Hölder Regularity

    NASA Astrophysics Data System (ADS)

    Isett, Philip; Oh, Sung-Jin

    2016-08-01

In (Isett, Regularity in time along the coarse scale flow for the Euler equations, 2013), the first author proposed a strengthening of Onsager's conjecture on the failure of energy conservation for incompressible Euler flows with Hölder regularity not exceeding 1/3. This stronger form of the conjecture implies that anomalous dissipation will fail for a generic Euler flow with regularity below the Onsager critical space L_t^∞ B_{3,∞}^{1/3} due to low regularity of the energy profile. This paper is the first and main paper in a series of two, the results of which may be viewed as first steps towards establishing the conjectured failure of energy regularity for generic solutions with Hölder exponent less than 1/5. The main result of the present paper shows that any given smooth Euler flow can be perturbed in C^{1/5-ε}_{t,x} on any pre-compact subset of R × R^3 to violate energy conservation. Furthermore, the perturbed solution is no smoother than C^{1/5-ε}_{t,x}. As a corollary of this theorem, we show the existence of nonzero C^{1/5-ε}_{t,x} solutions to Euler with compact space-time support, generalizing previous work of the first author (Isett, Hölder continuous Euler flows in three dimensions with compact support in time, 2012) to the nonperiodic setting.

  4. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for the penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve penalized-likelihood (PL) reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post-filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT tight-framelet regularizer shows promise for SPECT image reconstruction using the PAPA method.
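
An ISTA-style sketch of the ℓ1-of-transform-coefficients idea, with SciPy's orthogonal DCT standing in for the non-decimated DCT framelet and plain ISTA standing in for PAPA; the forward model (identity), step size, and penalty weight are all assumed:

```python
# Proximal gradient with an l1 penalty on orthogonal DCT coefficients:
# the prox step is soft-thresholding in the transform domain.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(9)
x = np.zeros((32, 32)); x[8:12, 8:12] = 1.0          # "hot lesion" phantom
y = x + 0.3 * rng.standard_normal(x.shape)            # noisy measurement
A = lambda u: u                                       # identity forward model, assumed

u, step, lam = np.zeros_like(y), 1.0, 0.25
for _ in range(20):
    grad = A(u) - y                                   # data-fidelity gradient
    z = dctn(u - step * grad, norm='ortho')
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold
    u = idctn(z, norm='ortho')
print("MSE noisy vs reconstructed: %.4f / %.4f"
      % (np.mean((y - x)**2), np.mean((u - x)**2)))
```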

  5. A Generic Path Algorithm for Regularized Statistical Estimation

    PubMed Central

    Zhou, Hua; Wu, Yichao

    2014-01-01

Regularization is widely used in statistics and machine learning to prevent overfitting and gear solutions toward prior information. In general, a regularized estimation problem minimizes the sum of a loss function and a penalty term. The penalty term is usually weighted by a tuning parameter and encourages certain constraints on the parameters to be estimated. Particular choices of constraints lead to the popular lasso, fused-lasso, and other generalized ℓ1 penalized regression methods. In this article we follow a recent idea by Wu (2011, 2012) and propose an exact path solver based on ordinary differential equations (EPSODE) that works for any convex loss function and can deal with generalized ℓ1 penalties as well as more complicated regularization such as inequality constraints encountered in shape-restricted regressions and nonparametric density estimation. Non-asymptotic error bounds for the equality regularized estimates are derived. In practice, the EPSODE can be coupled with AIC, BIC, Cp or cross-validation to select an optimal tuning parameter, or provides a convenient model space for performing model averaging or aggregation. Our applications to generalized ℓ1 regularized generalized linear models, shape-restricted regressions, Gaussian graphical models, and nonparametric density estimation showcase the potential of the EPSODE algorithm. PMID:25242834
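
As a quick illustration of a regularization path of the kind such solvers compute, the sketch below uses scikit-learn's coordinate-descent lasso_path rather than the ODE-based EPSODE solver, on assumed synthetic data:

```python
# Coefficients entering the model as the penalty weight decreases.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(10)
X = rng.standard_normal((100, 8))
y = X[:, 0] * 3 - X[:, 1] * 2 + 0.5 * rng.standard_normal(100)

alphas, coefs, _ = lasso_path(X, y, n_alphas=5)
for a, c in zip(alphas, coefs.T):
    print(f"alpha={a:6.3f}  nonzero={np.flatnonzero(np.abs(c) > 1e-8).size}")
```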

  6. X-ray computed tomography using curvelet sparse regularization

    SciTech Connect

    Wieczorek, Matthias Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.

    2015-04-15

Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  7. Image Super-Resolution via Adaptive Regularization and Sparse Representation.

    PubMed

    Cao, Feilong; Cai, Miaomiao; Tan, Yuanpeng; Zhao, Jianwei

    2016-07-01

Previous studies have shown that image patches can be well represented as a sparse linear combination of elements from an appropriately selected over-complete dictionary. Recently, single-image super-resolution (SISR) via sparse representation using blurred and downsampled low-resolution images has attracted increasing interest, where the aim is to obtain the coefficients for sparse representation by solving an l0 or l1 norm optimization problem. The l0 optimization is a nonconvex and NP-hard problem, while the l1 optimization usually requires many more measurements and presents new challenges even for images of the usual size, so we propose a new approach for SISR recovery based on nonconvex regularized optimization. The proposed approach is potentially a powerful method for recovering SISR via sparse representations, and it can yield a sparser solution than the l1 regularization method. We also consider the best choice for lp regularization with p in (0, 1), for which we propose a scheme that adaptively selects the norm value for each image patch. In addition, we provide a method for estimating the best value of the regularization parameter λ adaptively, and we discuss an alternating iteration method for selecting p and λ. We perform experiments which demonstrate that the proposed nonconvex regularized optimization method can outperform the convex optimization method and generate higher-quality images.

  8. Fast multislice fluorescence molecular tomography using sparsity-inducing regularization.

    PubMed

    Hejazi, Sedigheh Marjaneh; Sarkar, Saeed; Darezereshki, Ziba

    2016-02-01

    Fluorescence molecular tomography (FMT) is a rapidly growing imaging method that facilitates the recovery of small fluorescent targets within biological tissue. The major challenge facing the FMT reconstruction method is the ill-posed nature of the inverse problem. In order to overcome this problem, the acquisition of large FMT datasets and the utilization of a fast FMT reconstruction algorithm with sparsity regularization have been suggested recently. Therefore, the use of a joint L1/total-variation (TV) regularization as a means of solving the ill-posed FMT inverse problem is proposed. A comparative quantitative analysis of regularization methods based on the L1 norm and TV is performed using simulated datasets, and the results show that the fast composite splitting algorithm regularization method can ensure the accuracy and robustness of the FMT reconstruction. The feasibility of the proposed method is evaluated in an in vivo scenario for the subcutaneous implantation of a fluorescent-dye-filled capillary tube in a mouse, and also using hybrid FMT and x-ray computed tomography data. The results show that the proposed regularization overcomes the difficulties created by the ill-posed inverse problem.
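
    A schematic single iteration of a composite-splitting scheme of this kind: a gradient step on the data term, each penalty's proximal map applied to the same point, and the results averaged. The TV proximal step is delegated to a user-supplied denoiser, and all names and parameters are assumptions:

        import numpy as np

        def soft_threshold(v, t):
            """Proximal map of t * ||.||_1 (elementwise soft thresholding)."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def composite_splitting_step(x, grad_f, step, lam_l1, lam_tv, tv_prox):
            """One step for min f(x) + lam_l1*||x||_1 + lam_tv*TV(x)."""
            y = x - step * grad_f(x)
            x_l1 = soft_threshold(y, step * lam_l1)
            x_tv = tv_prox(y, step * lam_tv)   # e.g., a Chambolle-type TV denoiser
            return 0.5 * (x_l1 + x_tv)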

  9. Spatially varying regularization of deconvolution in 3D microscopy.

    PubMed

    Seo, J; Hwang, S; Lee, J-M; Park, H

    2014-08-01

    Confocal microscopy has become an essential tool to explore biospecimens in 3D. Confocal microscopy images are still degraded by out-of-focus blur and Poisson noise. Many deconvolution methods, including the Richardson-Lucy (RL) method, the Tikhonov method and the split-gradient (SG) method, have been well received. The RL deconvolution method results in enhanced image quality, especially for Poisson noise. The Tikhonov deconvolution method improves on the RL method by imposing a prior model of spatial regularization, which encourages adjacent voxels to appear similar. The SG method also contains spatial regularization and is capable of incorporating many edge-preserving priors, resulting in improved image quality. For the Tikhonov and SG methods, the strength of spatial regularization is fixed regardless of spatial location. This study improves on the Tikhonov and SG deconvolution methods by allowing the strength of spatial regularization to differ across spatial locations in a given image. The novel method shows improved image quality. The method was tested on phantom data for which the ground truth and the point spread function are known. A Kullback-Leibler (KL) divergence value of 0.097 is obtained when applying spatially variable regularization to the SG method, whereas a KL value of 0.409 is obtained with the Tikhonov method. In tests on real data, for which the ground truth is unknown, the reconstructed data show improved noise characteristics while maintaining important image features such as edges.
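
    A toy 1D sketch of the central idea, a per-location regularization weight, using plain gradient descent; the operators and solver are illustrative assumptions, not the authors' 3D implementation:

        import numpy as np

        def varying_tikhonov_deconv(H, y, lam_map, iters=500, step=1e-3):
            """Toy deconvolution: min ||Hx - y||^2 + sum_i lam_i * ((Dx)_i)^2,
            where D takes first differences and lam_map holds the per-location
            regularization strengths lam_i."""
            n = H.shape[1]
            x = np.zeros(n)
            D = np.eye(n) - np.eye(n, k=-1)   # first-difference operator
            for _ in range(iters):
                grad = 2.0 * H.T @ (H @ x - y) + 2.0 * D.T @ (lam_map * (D @ x))
                x -= step * grad
            return x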

  10. Regular and Irregular Mixing in Hydrocarbon Block Copolymers

    NASA Astrophysics Data System (ADS)

    Register, Richard; Beckingham, Bryan

    2014-03-01

    Since hydrocarbon polymers interact through relatively simple (dispersive) interactions, one might expect them to be described by simple models of mixing energetics, such as regular mixing. However, the pioneering work of Graessley on saturated hydrocarbon polymer blends showed that while regular mixing is obeyed in some cases, both positive and negative deviations (in the magnitude of the mixing enthalpy) from regular mixing are observed in other cases. Here, we describe the mixing energetics for two series of hydrocarbon polymers wherein the interaction strengths may be continuously tuned, and which can be readily incorporated into block copolymers. Random copolymers of styrene and medium-vinyl isoprene, in which either the isoprene or both the isoprene and styrene units have been saturated, obey regular mixing over the entire composition range and for both hydrogenated derivatives. Well-defined block copolymers with arbitrarily small interblock interaction strengths can be constructed from these units, permitting the interdomain spacing to be made arbitrarily large while holding the order-disorder transition temperature constant. However, block copolymers of hydrogenated polybutadiene with such random copolymers show very strong positive deviations from regular mixing when the styrene aromaticity is preserved, and sizable negative deviations when the styrene units are saturated to vinylcyclohexane. Both of these cases can be quantitatively described by a ternary mixing model.

  11. Modeling and Analyzing Web Service Behavior with Regular Flow Nets

    NASA Astrophysics Data System (ADS)

    Xie, Jingang; Tan, Qingping; Cao, Guorong

    Web services are emerging as a promising technology for the development of next generation distributed heterogeneous software systems. To support automated service composition and adaptation, a formal approach for modeling Web service behavior is needed. In this paper we present a novel methodology for modeling and analysis based on regular flow nets, which extend Petri nets and YAWL. Firstly, we motivate the formal definition of regular flow nets. Secondly, we develop a formalism for dealing with symbolic markings and use it to define the symbolic coverability tree. Finally, an algorithm for generating the symbolic coverability tree is presented. Using the symbolic coverability tree we can analyze the properties of regular flow nets that concern us. The proposed modeling and analysis technique allows us to deal with cyclic services and data dependence among services.

  12. Optimized Bayes variational regularization prior for 3D PET images.

    PubMed

    Rapisarda, Eugenio; Presotto, Luca; De Bernardi, Elisabetta; Gilardi, Maria Carla; Bettinardi, Valentino

    2014-09-01

    A new prior for variational Maximum a Posteriori (MAP) regularization is proposed for use in a 3D One-Step-Late (OSL) reconstruction algorithm that also accounts for the Point Spread Function (PSF) of the PET system. The new regularization prior strongly smoothes background regions while preserving transitions. A detectability index is proposed to optimize the prior. The new algorithm has been compared with different reconstruction algorithms such as 3D-OSEM+PSF, 3D-OSEM+PSF+post-filtering and 3D-OSL with a Gauss-Total Variation (GTV) prior. The proposed regularization allows noise to be controlled while maintaining good signal recovery; compared to the other algorithms it demonstrates a very good compromise between improved quantitation and good image quality. PMID:24958594
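
    A sketch of the One-Step-Late idea in its generic (Green's OSL) form, where the prior's gradient is evaluated at the current estimate and folded into the EM denominator; matrix sizes, the prior gradient, and the numerical safeguards are assumptions:

        import numpy as np

        def osl_em(A, y, grad_prior, beta, iters=50):
            """OSL MAP-EM sketch: A is the system matrix, y the measured counts,
            grad_prior(x) the gradient of the chosen smoothing prior."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
            for _ in range(iters):
                ratio = y / np.maximum(A @ x, 1e-12)
                x = x * (A.T @ ratio) / np.maximum(sens + beta * grad_prior(x), 1e-12)
            return x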

  13. Regularity based descriptor computed from local image oscillations.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo; Legrand, Pierrick; Lutton, Evelyne

    2007-05-14

    This work presents a novel local image descriptor based on the concept of pointwise signal regularity. Local image regions are extracted using either an interest point or an interest region detector, and discriminative feature vectors are constructed by uniformly sampling the pointwise Hölderian regularity around each region center. Regularity estimation is performed using local image oscillations, the most straightforward method directly derived from the definition of the Hölder exponent. Furthermore, estimating the Hölder exponent in this manner has proven to be superior, in most cases, when compared to wavelet based estimation as was shown in previous work. Our detector shows invariance to illumination change, JPEG compression, image rotation and scale change. Results show that the proposed descriptor is stable with respect to variations in imaging conditions, and reliable performance metrics prove it to be comparable and in some instances better than SIFT, the state-of-the-art in local descriptors. PMID:19546918
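
    A minimal sketch of oscillation-based regularity estimation as defined above: the oscillation over windows of growing radius is measured and the Hölder exponent is read off as a log-log slope. The window radii and the least-squares fit are assumptions:

        import numpy as np

        def holder_exponent(signal, t, radii=(1, 2, 4, 8, 16)):
            """Estimate the pointwise Hölder exponent of a 1D signal at index t:
            osc_r = max - min over a window of radius r around t, and the
            exponent is the slope of log(osc_r) against log(r)."""
            oscs = []
            for r in radii:
                window = signal[max(t - r, 0): t + r + 1]
                oscs.append(window.max() - window.min() + 1e-12)  # avoid log(0)
            slope, _ = np.polyfit(np.log(radii), np.log(oscs), 1)
            return slope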

  14. Analysis of the "Learning in Regular Classrooms" movement in China.

    PubMed

    Deng, M; Manset, G

    2000-04-01

    The Learning in Regular Classrooms experiment has evolved in response to China's efforts to educate its large population of students with disabilities who, until the mid-1980s, were denied a free education. In the Learning in Regular Classrooms, students with disabilities (primarily sensory impairments or mild mental retardation) are educated in neighborhood schools in mainstream classrooms. Despite difficulties associated with developing effective inclusive programming, this approach has contributed to a major increase in the enrollment of students with disabilities and increased involvement of schools, teachers, and parents in China's newly developing special education system. Here we describe the development of the Learning in Regular Classroom approach and the challenges associated with educating students with disabilities in China.

  15. Methods for determining regularization for atmospheric retrieval problems

    NASA Astrophysics Data System (ADS)

    Steck, Tilman

    2002-03-01

    The atmosphere of Earth has already been investigated by several spaceborne instruments, and several more will be launched, e.g., on NASA's Earth Observing System Aura platform and the European Space Agency's Environmental Satellite. To stabilize the results of atmospheric retrievals, constraints are used in the iteration process: both hard constraints (discretization of the retrieval grid) and soft constraints (regularization operators) are included in the retrieval. Tikhonov regularization is often used as a soft constraint. In this study, different types of Tikhonov operator were compared, and several new methods were developed to determine the optimal strength of the constraint operationally. The resulting regularization parameters were applied successfully to an ozone retrieval from simulated nadir sounding spectra like those expected to be measured by the Tropospheric Emission Spectrometer, which is part of the Aura platform. Retrievals were characterized by means of estimated error, averaging kernel, vertical resolution, and degrees of freedom.
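
    A minimal sketch of scanning the strength of a Tikhonov soft constraint, from which an operational parameter-choice rule (L-curve, discrepancy principle, etc.) could select a value; the matrix formulation and grid are assumptions:

        import numpy as np

        def tikhonov_scan(K, y, L, alphas):
            """Solve x(a) = argmin ||Kx - y||^2 + a * ||Lx||^2 on a grid of
            strengths a, returning (a, residual norm, seminorm) for each."""
            out = []
            for a in alphas:
                x = np.linalg.solve(K.T @ K + a * L.T @ L, K.T @ y)
                out.append((a, np.linalg.norm(K @ x - y), np.linalg.norm(L @ x)))
            return out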

  16. Structural characterization of the packings of granular regular polygons

    NASA Astrophysics Data System (ADS)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  17. Breast ultrasound tomography with total-variation regularization

    SciTech Connect

    Huang, Lianjie; Li, Cuiping; Duric, Neb

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate the applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.

  18. Radial basis function networks and complexity regularization in function learning.

    PubMed

    Krzyzak, A; Linder, T

    1998-01-01

    In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single hidden layer radial basis function network. Our approach differs from previous complexity regularization neural-network function learning schemes in that we operate with random covering numbers and l1 metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
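
    A crude stand-in for complexity-regularized model selection, penalizing empirical risk by model size over a family of RBF networks; the penalty form here is a simplification of the covering-number machinery analyzed in the paper, and all parameters are assumptions:

        import numpy as np

        def rbf_design(X, centers, width):
            """Design matrix of Gaussian radial basis functions for 1D inputs."""
            return np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2 * width**2))

        def fit_rbf_complexity_reg(X, y, max_centers=20, width=0.3, pen=0.05):
            """Pick the number of RBF centers by minimizing empirical risk plus
            a complexity penalty proportional to model size."""
            best = None
            for k in range(1, max_centers + 1):
                centers = np.linspace(X.min(), X.max(), k)
                Phi = rbf_design(X, centers, width)
                w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
                risk = np.mean((Phi @ w - y) ** 2) + pen * k / len(X)
                if best is None or risk < best[0]:
                    best = (risk, k, centers, w)
            return best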

  19. Manufacture of Regularly Shaped Sol-Gel Pellets

    NASA Technical Reports Server (NTRS)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, while regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogel, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogel. Advantageously, the extruded pellets can be dropped directly in a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross linker. These pellets can be dried to mechanically robust X-Aerogel.

  20. Local conservative regularizations of compressible magnetohydrodynamic and neutral flows

    NASA Astrophysics Data System (ADS)

    Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.

    2016-02-01

    Ideal systems like magnetohydrodynamics (MHD) and Euler flow may develop singularities in vorticity (w = ∇ × v). Viscosity and resistivity provide dissipative regularizations of the singularities. In this paper, we propose a minimal, local, conservative, nonlinear, dispersive regularization of compressible flow and ideal MHD, in analogy with the KdV regularization of the 1D kinematic wave equation. This work extends and significantly generalizes earlier work on incompressible Euler and ideal MHD. It involves a micro-scale cutoff length λ which is a function of density, unlike in the incompressible case. In MHD, it can be taken to be of order the electron collisionless skin depth c/ω_pe. Our regularization preserves the symmetries of the original systems and, with appropriate boundary conditions, leads to associated conservation laws. Energy and enstrophy are subject to a priori bounds determined by initial data in contrast to the unregularized systems. A Hamiltonian and Poisson bracket formulation is developed and applied to generalize the constitutive relation to bound higher moments of vorticity. A "swirl" velocity field is identified, and shown to transport w/ρ and B/ρ, generalizing the Kelvin-Helmholtz and Alfvén theorems. The steady regularized equations are used to model a rotating vortex, MHD pinch, and a plane vortex sheet. The proposed regularization could facilitate numerical simulations of fluid/MHD equations and provide a consistent statistical mechanics of vortices/current filaments in 3D, without blowup of enstrophy. Implications for detailed analyses of fluid and plasma dynamic systems arising from our work are briefly discussed.

  1. Zigzag stacks and m-regular linear stacks.

    PubMed

    Chen, William Y C; Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2014-12-01

    The contact map of a protein fold is a graph that represents the patterns of contacts in the fold. It is known that the contact map can be decomposed into stacks and queues. RNA secondary structures are special stacks in which the degree of each vertex is at most one and each arc has length of at least two. Waterman and Smith derived a formula for the number of RNA secondary structures of length n with exactly k arcs. Höner zu Siederdissen et al. developed a folding algorithm for extended RNA secondary structures in which each vertex has maximum degree two. An equation for the generating function of extended RNA secondary structures was obtained by Müller and Nebel by using a context-free grammar approach, which leads to an asymptotic formula. In this article, we consider m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by two. Extended RNA secondary structures are exactly 2-regular linear stacks. For any m ≥ 2, we obtain an equation for the generating function of the m-regular linear stacks. For given m, we deduce a recurrence relation and an asymptotic formula for the number of m-regular linear stacks on n vertices. To establish the equation, we use the reduction operation of Chen, Deng, and Du to transform an m-regular linear stack to an m-reduced zigzag (or alternating) stack. Then we find an equation for m-reduced zigzag stacks leading to an equation for m-regular linear stacks. PMID:25455155
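
    For the simpler degree-at-most-one case (classical RNA secondary structures with arc length at least m), the Waterman-style counting recurrence can be evaluated directly; a sketch, noting that the degree-two m-regular case treated in the article requires the zigzag-stack reduction and is not implemented here:

        from functools import lru_cache

        def count_structures(n, m=2):
            """Count noncrossing partial matchings on n vertices in which every
            arc (j, k) has length k - j >= m (RNA secondary structures for m = 2)."""
            @lru_cache(maxsize=None)
            def S(k):
                if k <= m:
                    return 1  # too short to hold any arc
                total = S(k - 1)                    # vertex k unpaired
                for j in range(1, k - m + 1):       # vertex k paired with j
                    total += S(j - 1) * S(k - j - 1)
                return total
            return S(n)

        print(count_structures(10))   # e.g., structures on 10 vertices for m = 2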

  2. Regularization of languages by adults and children: A mathematical framework.

    PubMed

    Rische, Jacquelyn L; Komarova, Natalia L

    2016-02-01

    The fascinating ability of humans to modify the linguistic input and "create" a language has been widely discussed. In the work of Newport and colleagues, it has been demonstrated that both children and adults have some ability to process inconsistent linguistic input and "improve" it by making it more consistent. In Hudson Kam and Newport (2009), artificial miniature language acquisition from an inconsistent source was studied. It was shown that (i) children are better at language regularization than adults and that (ii) adults can also regularize, depending on the structure of the input. In this paper we create a learning algorithm of the reinforcement-learning type, which exhibits patterns reported in Hudson Kam and Newport (2009) and suggests a way to explain them. It turns out that in order to capture the differences between children's and adults' learning patterns, we need to introduce a certain asymmetry in the learning algorithm. Namely, we have to assume that the reaction of the learners differs depending on whether or not the source's input coincides with the learner's internal hypothesis. We interpret this result in the context of a different reaction of children and adults to implicit, expectation-based evidence, positive or negative. We propose that a possible mechanism that contributes to the children's ability to regularize an inconsistent input is related to their heightened sensitivity to positive evidence rather than the (implicit) negative evidence. In our model, regularization comes naturally as a consequence of a stronger reaction of the children to evidence supporting their preferred hypothesis. In adults, their ability to adequately process implicit negative evidence prevents them from regularizing the inconsistent input, resulting in a weaker degree of regularization. PMID:26580218
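
    A toy illustration, not the authors' model, of how an asymmetric reaction to hypothesis-confirming versus hypothesis-contradicting input separates regularization from probability matching; the update rates and the encoding of the hypothesis are assumptions:

        import random

        def learn(p_source=0.7, n_trials=5000, match_rate=0.20, mismatch_rate=0.05):
            """The state p is the learner's production probability for form A.
            Input matching the learner's current majority hypothesis moves p
            strongly (match_rate); contradicting input moves p only weakly
            (mismatch_rate). A large asymmetry drives p toward an extreme
            (regularization); equal rates yield probability matching."""
            p = 0.5
            for _ in range(n_trials):
                heard_a = random.random() < p_source
                matches = heard_a == (p >= 0.5)
                rate = match_rate if matches else mismatch_rate
                p += rate * ((1.0 if heard_a else 0.0) - p)
            return p

        print(learn())                                     # ends well above 0.7 (regularized)
        print(learn(match_rate=0.05, mismatch_rate=0.05))  # hovers near 0.7 (matching)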

  3. Adiabatic regularization of power spectra in nonminimally coupled chaotic inflation

    NASA Astrophysics Data System (ADS)

    Alinea, Allan L.

    2016-10-01

    We investigate the effect of adiabatic regularization on both the tensor- and scalar-perturbation power spectra in nonminimally coupled chaotic inflation. Similar to that of the minimally coupled general single-field inflation, we find that the subtraction term is suppressed by an exponentially decaying factor involving the number of e-folds. By following the subtraction term long enough beyond horizon crossing, the regularized power spectrum tends to the "bare" power spectrum. This study justifies the use of the unregularized ("bare") power spectrum in standard calculations.

  4. Duration of growth suppressive effects of regular inhaled corticosteroids

    PubMed Central

    Doull, I.; Campbell, M.; Holgate, S.

    1998-01-01

    The growth of 50 children receiving regular inhaled corticosteroids was divided into six-week intervals from the start of treatment and compared with their growth when not receiving regular corticosteroids, using a random effects regression model. Growth suppression was most marked during the initial six weeks after starting treatment, with most suppression occurring during the initial 18 weeks. Thereafter the children's growth was similar to their growth when not receiving treatment. These findings have important consequences for patterns of treatment of asthma in children.

 PMID:9579164

  5. Risk Segmentation Related to the Offering of a Consumer-Directed Health Plan: A Case Study of Humana Inc.

    PubMed Central

    Tollen, Laura A; Ross, Murray N; Poor, Stephen

    2004-01-01

    Objective To determine whether the offering of a consumer-directed health plan (CDHP) is likely to cause risk segmentation in an employer group. Study Setting and Data Source The study population comprises the approximately 10,000 people (employees and dependents) enrolled as members of the employee health benefit program of Humana Inc. at its headquarters in Louisville, Kentucky, during the benefit years starting July 1, 2000, and July 1, 2001. This analysis is based on primary collection of claims, enrollment, and employment data for those employees and dependents. Study Design This is a case study of the experience of a single employer in offering two consumer-directed health plan options (“Coverage First 1” and “Coverage First 2”) to its employees. We assessed the risk profile of those choosing the Coverage First plans and those remaining in more traditional health maintenance organization (HMO) and preferred provider organization (PPO) coverage. Risk was measured using prior claims (in dollars per member per month), prior utilization (admissions/1,000; average length of stay; prescriptions/1,000; physician office visit services/1,000), a pharmacy-based risk assessment tool (developed by Ingenix), and demographics. Data Collection/Extraction Methods Complete claims and administrative data were provided by Humana Inc. for the two-year study period. Unique identifiers enabled us to track subscribers' individual enrollment and utilization over this period. Principal Findings Based on demographic data alone, there did not appear to be a difference in the risk profiles of those choosing versus not choosing Coverage First. However, based on prior claims and prior use data, it appeared that those who chose Coverage First were healthier than those electing to remain in more traditional coverage. For each of five services, prior-year usage by people who subsequently enrolled in Coverage First 1 (CF1) was below 60 percent of the average for the whole group

  6. A Unified Approach for Solving Nonlinear Regular Perturbation Problems

    ERIC Educational Resources Information Center

    Khuri, S. A.

    2008-01-01

    This article describes a simple alternative unified method of solving nonlinear regular perturbation problems. The procedure is based upon the manipulation of Taylor's approximation for the expansion of the nonlinear term in the perturbed equation. An essential feature of this technique is the relative simplicity used and the associated unified…
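
    A minimal symbolic sketch of the regular-perturbation recipe on a toy algebraic equation; the example equation and the use of SymPy are assumptions, not the article's procedure:

        import sympy as sp

        # Regular perturbation for x^2 + eps*x - 1 = 0: expand the root as
        # x = x0 + eps*x1 + eps^2*x2, substitute, and match powers of eps.
        eps = sp.symbols('epsilon')
        x0, x1, x2 = sp.symbols('x0 x1 x2')
        x = x0 + eps * x1 + eps**2 * x2
        expr = sp.expand(x**2 + eps * x - 1)
        eqs = [sp.Eq(expr.coeff(eps, k), 0) for k in range(3)]  # eps^0, eps^1, eps^2
        print(sp.solve(eqs, [x0, x1, x2], dict=True))  # x0 = ±1, x1 = -1/2, x2 = ±1/8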

  7. Surface-based prostate registration with biomechanical regularization

    NASA Astrophysics Data System (ADS)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MRUS registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.

  8. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work Application of Principles in Relation to Other Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... employees. In determining what activities constitute hours of work under the Act, there is generally...

  9. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work Application of Principles in Relation to Other Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... employees. In determining what activities constitute hours of work under the Act, there is generally...

  10. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  11. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  12. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  13. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work Application of Principles in Relation to Other Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... employees. In determining what activities constitute hours of work under the Act, there is generally...

  14. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  15. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work Application of Principles in Relation to Other Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... employees. In determining what activities constitute hours of work under the Act, there is generally...

  16. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  17. Regularized Partial and/or Constrained Redundancy Analysis

    ERIC Educational Resources Information Center

    Takane, Yoshio; Jung, Sunho

    2008-01-01

    Methods of incorporating a ridge type of regularization into partial redundancy analysis (PRA), constrained redundancy analysis (CRA), and partial and constrained redundancy analysis (PCRA) were discussed. The usefulness of ridge estimation in reducing mean square error (MSE) has been recognized in multiple regression analysis for some time,…

  18. Rhythm's Gonna Get You: Regular Meter Facilitates Semantic Sentence Processing

    ERIC Educational Resources Information Center

    Rothermich, Kathrin; Schmidt-Kassow, Maren; Kotz, Sonja A.

    2012-01-01

    Rhythm is a phenomenon that fundamentally affects the perception of events unfolding in time. In language, we define "rhythm" as the temporal structure that underlies the perception and production of utterances, whereas "meter" is defined as the regular occurrence of beats (i.e. stressed syllables). In stress-timed languages such as German, this…

  19. New Technologies in Portugal: Regular Middle and High School

    ERIC Educational Resources Information Center

    Florentino, Teresa; Sanchez, Lucas; Joyanes, Luis

    2010-01-01

    Purpose: The purpose of this paper is to elaborate upon the relation between information and communication technologies (ICT), particularly web-based resources, and their use, programs and learning in Portuguese middle and high regular public schools. Design/methodology/approach: Adding collected documentation on curriculum, laws and other related…

  20. Preverbal Infants Infer Intentional Agents from the Perception of Regularity

    ERIC Educational Resources Information Center

    Ma, Lili; Xu, Fei

    2013-01-01

    Human adults have a strong bias to invoke intentional agents in their intuitive explanations of ordered wholes or regular compositions in the world. Less is known about the ontogenetic origin of this bias. In 4 experiments, we found that 9-to 10-month-old infants expected a human hand, but not a mechanical tool with similar affordances, to be the…

  1. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... status when appointed as cadets. (b) Regular category applicants must arrange to have their high school... SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14.... Applicants not selected are reassigned on Academy notification to the CBPO. Applicants to technical...

  2. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... status when appointed as cadets. (b) Regular category applicants must arrange to have their high school... SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14.... Applicants not selected are reassigned on Academy notification to the CBPO. Applicants to technical...

  3. Distances and isomorphisms in 4-regular circulant graphs

    NASA Astrophysics Data System (ADS)

    Donno, Alfredo; Iacono, Donatella

    2016-06-01

    We compute the Wiener index and the Hosoya polynomial of the Cayley graph of some cyclic groups, with all possible generating sets containing four elements, up to isomorphism. We find that order 17 is the smallest case providing two non-isomorphic 4-regular circulant graphs with the same Wiener index. Some open problems and questions are listed.
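
    The order-17 observation can be checked by brute force over all offset pairs; a sketch assuming networkx:

        import networkx as nx
        from itertools import combinations

        # Look for two non-isomorphic 4-regular circulant graphs on 17 vertices
        # sharing the same Wiener index (the smallest case reported above).
        n = 17
        found = []
        for s in combinations(range(1, n // 2 + 1), 2):
            G = nx.circulant_graph(n, list(s))
            found.append((s, nx.wiener_index(G), G))
        for (s1, w1, g1), (s2, w2, g2) in combinations(found, 2):
            if w1 == w2 and not nx.is_isomorphic(g1, g2):
                print(f"offsets {s1} and {s2}: same Wiener index {w1}, non-isomorphic")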

  4. Exploring How Special and Regular Education Teachers Work Together Collaboratively

    ERIC Educational Resources Information Center

    Broyard-Baptiste, Erin

    2012-01-01

    This study was based on the need for additional research to explore the nature of collaborative teaching experiences in the K-12 education setting. For that reason, this study was designed to examine the experiences and perceptions of special education and regular education teachers with respect to inclusion and the perceptions of these teachers…

  5. The Hearing Impaired Student in the Regular Classroom.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    The guide provides strategies for teachers to use with deaf and hearing impaired (HI) students in regular classrooms in the province of Alberta, Canada. An introductory section includes symptoms of a suspected hearing loss and a sample audiogram to aid teachers in recognizing the problem. Ways to meet special needs at different age levels are…

  6. Acquisition of Formulaic Sequences in Intensive and Regular EFL Programmes

    ERIC Educational Resources Information Center

    Serrano, Raquel; Stengers, Helene; Housen, Alex

    2015-01-01

    This paper aims to analyse the role of time concentration of instructional hours on the acquisition of formulaic sequences in English as a foreign language (EFL). Two programme types that offer the same amount of hours of instruction are considered: intensive (110 hours/1 month) and regular (110 hours/7 months). The EFL learners under study are…

  7. Adult Regularization of Inconsistent Input Depends on Pragmatic Factors

    ERIC Educational Resources Information Center

    Perfors, Amy

    2016-01-01

    In a variety of domains, adults who are given input that is only partially consistent do not discard the inconsistent portion (regularize) but rather maintain the probability of consistent and inconsistent portions in their behavior (probability match). This research investigates the possibility that adults probability match, at least in part,…

  8. From Numbers to Letters: Feedback Regularization in Visual Word Recognition

    ERIC Educational Resources Information Center

    Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel

    2010-01-01

    Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…

  9. Multiple Learning Strategies Project. Medical Assistant. [Regular Vocational. Vol. 1.

    ERIC Educational Resources Information Center

    Varney, Beverly; And Others

    This instructional package, one of four designed for regular vocational students, focuses on the vocational area of medical assistant. Contained in this document are twenty-six learning modules organized into three units: language; receptioning; and asepsis. Each module includes these elements: a performance objective page telling what will be…

  10. Psychological Benefits of Regular Physical Activity: Evidence from Emerging Adults

    ERIC Educational Resources Information Center

    Cekin, Resul

    2015-01-01

    Emerging adulthood is a transitional stage between late adolescence and young adulthood in life-span development that requires significant changes in people's lives. Therefore, identifying protective factors for this population is crucial. This study investigated the effects of regular physical activity on self-esteem, optimism, and happiness in…

  11. Integrating Handicapped Children Into Regular Classrooms. (With Abstract Bibliography).

    ERIC Educational Resources Information Center

    Glockner, Mary

    This document is based on an interview with Dr. Jenny Klein, Director of Educational Services, Office of Child Development, who stresses the desirability of integrating handicapped children into regular classrooms. She urges the teacher to view the handicapped child as a normal child with some special needs. Specific suggestions for the teacher…

  12. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, G.C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.

  13. Interrupting Sitting Time with Regular Walks Attenuates Postprandial Triglycerides.

    PubMed

    Miyashita, M; Edamoto, K; Kidokoro, T; Yanaoka, T; Kashiwabara, K; Takahashi, M; Burns, S

    2016-02-01

    We compared the effects of prolonged sitting with the effects of sitting interrupted by regular walking and the effects of prolonged sitting after continuous walking on postprandial triglyceride in postmenopausal women. 15 participants completed 3 trials in random order: 1) prolonged sitting, 2) regular walking, and 3) prolonged sitting preceded by continuous walking. During the sitting trial, participants rested for 8 h. For the walking trials, participants walked briskly in either twenty 90-sec bouts over 8 h or one 30-min bout in the morning (09:00-09:30). Except for walking, both exercise trials mimicked the sitting trial. In each trial, participants consumed a breakfast (08:00) and lunch (11:00). Blood samples were collected in the fasted state and at 2, 4, 6 and 8 h after breakfast. The serum triglyceride incremental area under the curve was 15 and 14% lower after regular walking compared with prolonged sitting and prolonged sitting after continuous walking (4.73±2.50 vs. 5.52±2.95 vs. 5.50±2.59 mmol/L∙8 h respectively, main effect of trial: P=0.023). Regularly interrupting sitting time with brief bouts of physical activity can reduce postprandial triglyceride in postmenopausal women. PMID:26509374
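
    The incremental area under the curve reported above is conventionally the trapezoidal AUC minus the rectangle contributed by the fasting baseline; a sketch with made-up sample values (the study's exact convention may differ):

        import numpy as np

        def incremental_auc(times, values):
            """Trapezoidal AUC of the response minus the time-zero baseline."""
            dt = np.diff(times)
            total = np.sum(dt * (values[1:] + values[:-1]) / 2.0)  # trapezoidal rule
            baseline = values[0] * (times[-1] - times[0])
            return total - baseline

        # Made-up triglycerides (mmol/L) fasted and 2, 4, 6, 8 h after breakfast
        print(incremental_auc(np.array([0.0, 2, 4, 6, 8]),
                              np.array([1.1, 1.9, 2.2, 1.8, 1.4])))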

  14. Effect of regular and decaffeinated coffee on serum gastrin levels.

    PubMed

    Acquaviva, F; DeFrancesco, A; Andriulli, A; Piantino, P; Arrigoni, A; Massarenti, P; Balzola, F

    1986-04-01

    We evaluated the hypothesis that the noncaffeine gastric acid stimulant effect of coffee might be by way of serum gastrin release. After 10 healthy volunteers drank 50 ml of coffee solution corresponding to one cup of home-made regular coffee containing 10 g of sugar and 240 mg/100 ml of caffeine, serum total gastrin levels peaked at 10 min and returned to basal values within 30 min; the response was of little significance (1.24 times the median basal value). Drinking 100 ml of sugared water (as control) resulted in occasional random elevations of serum gastrin which were not statistically significant. Drinking 100 ml of regular or decaffeinated coffee resulted in a prompt and lasting elevation of total gastrin; mean integrated outputs after regular or decaffeinated coffee were, respectively, 2.3 and 1.7 times the values in the control test. Regular and decaffeinated coffees share a strong gastrin-releasing property. Neither distension, osmolarity, calcium, nor amino acid content of the coffee solution can account for this property, which should be ascribed to some other unidentified ingredient. This property is at least partially lost during the process of caffeine removal. PMID:3745848

  15. Implicit Learning of L2 Word Stress Regularities

    ERIC Educational Resources Information Center

    Chan, Ricky K. W.; Leung, Janny H. C.

    2014-01-01

    This article reports an experiment on the implicit learning of second language stress regularities, and presents a methodological innovation on awareness measurement. After practising two-syllable Spanish words, native Cantonese speakers with English as a second language (L2) completed a judgement task. Critical items differed only in placement of…

  16. Integration of Dependent Handicapped Classes into the Regular School.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Guidelines are provided for integrating the dependent handicapped student (DHS) into the regular school in Alberta, Canada. A short overview comprises the introduction. Identified are two types of integration: (1) incidental contact and (2) planned contact for social, recreational, and educational activities with other students. Noted are types of…

  17. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an lp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
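
    The lp/Schatten link mentioned above is easiest to see for the nuclear norm (Schatten order 1), whose proximal map soft-thresholds singular values exactly as the l1 prox soft-thresholds vector entries; a minimal sketch:

        import numpy as np

        def prox_nuclear(X, tau):
            """Proximal map of tau * ||X||_S1 (Schatten 1-norm / nuclear norm):
            soft-threshold the singular values, the matrix analogue of the
            elementwise l1 prox."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt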

  18. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Cable television system regular monitoring. 76.614 Section 76.614 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable...

  19. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... utilized by a cable operator shall be adequate to detect a leakage source which produces a field strength... leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in...

  20. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... utilized by a cable operator shall be adequate to detect a leakage source which produces a field strength... leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in...

  1. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... utilized by a cable operator shall be adequate to detect a leakage source which produces a field strength... leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in...

  2. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... utilized by a cable operator shall be adequate to detect a leakage source which produces a field strength... leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in...

  3. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...

  4. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...

  5. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...

  6. Nonnative Processing of Verbal Morphology: In Search of Regularity

    ERIC Educational Resources Information Center

    Gor, Kira; Cook, Svetlana

    2010-01-01

    There is little agreement on the mechanisms involved in second language (L2) processing of regular and irregular inflectional morphology and on the exact role of age, amount, and type of exposure to L2 resulting in differences in L2 input and use. The article contributes to the ongoing debates by reporting the results of two experiments on Russian…

  7. 75 FR 23218 - Information Collection; Direct Loan Servicing-Regular

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-03

    ... loan agreements, assist the borrower in achieving business goals, and regular servicing of the loan... methods: Mail: J. Lee Nault, Loan Specialist, USDA/FSA/FLP, STOP 0523, 1400 Independence Avenue, SW., Washington, DC 20250-0503. E-mail: lee.nault@wdc.usda.gov . Fax: 202-690-0949. You may also send comments...

  8. Cost Effectiveness of Premium Versus Regular Gasoline in MCPS Buses.

    ERIC Educational Resources Information Center

    Baacke, Clifford M.; Frankel, Steven M.

    The primary question posed in this study is whether premium or regular gasoline is more cost effective for the Montgomery County Public School (MCPS) bus fleet, as a whole, when miles-per-gallon, cost-per-gallon, and repair costs associated with mileage are considered. On average, both miles-per-gallon, and repair costs-per-mile favor premium…

  9. The Student with Albinism in the Regular Classroom.

    ERIC Educational Resources Information Center

    Ashley, Julia Robertson

    This booklet, intended for regular education teachers who have children with albinism in their classes, begins with an explanation of albinism, then discusses the special needs of the student with albinism in the classroom, and presents information about adaptations and other methods for responding to these needs. Special social and emotional…

  10. Identifying and Exploiting Spatial Regularity in Data Memory References

    SciTech Connect

    Mohan, T; de Supinski, B R; McKee, S A; Mueller, F; Yoo, A; Schulz, M

    2003-07-24

    The growing processor/memory performance gap causes the performance of many codes to be limited by memory accesses. If known to exist in an application, strided memory accesses forming streams can be targeted by optimizations such as prefetching, relocation, remapping, and vector loads. Undetected, they can be a significant source of memory stalls in loops. Existing stream-detection mechanisms either require special hardware, which may not gather statistics for subsequent analysis, or are limited to compile-time detection of array accesses in loops. Formally, little treatment has been accorded to the subject; the concept of locality fails to capture the existence of streams in a program's memory accesses. The contributions of this paper are as follows. First, we define spatial regularity as a means to discuss the presence and effects of streams. Second, we develop measures to quantify spatial regularity, and we design and implement an on-line, parallel algorithm to detect streams - and hence regularity - in running applications. Third, we use examples from real codes and common benchmarks to illustrate how derived stream statistics can be used to guide the application of profile-driven optimizations. Overall, we demonstrate the benefits of our novel regularity metric as a low-cost instrument to detect potential for code optimizations affecting memory performance.
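
    A minimal sketch of detecting fixed-stride streams in an address trace; the run-length threshold and trace format are assumptions, not the paper's on-line parallel algorithm:

        def detect_streams(trace, min_len=4):
            """Scan an address trace for fixed-stride runs (streams). Returns
            (start_index, stride, length) for runs of at least min_len accesses."""
            streams, i = [], 0
            while i + 1 < len(trace):
                stride = trace[i + 1] - trace[i]
                j = i + 1
                while j + 1 < len(trace) and trace[j + 1] - trace[j] == stride:
                    j += 1
                if j - i + 1 >= min_len and stride != 0:
                    streams.append((i, stride, j - i + 1))
                i = j
            return streams

        # Two streams: stride 8 of length 5, then stride 4 of length 4.
        print(detect_streams([0, 8, 16, 24, 32, 100, 104, 108, 112, 7]))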

  11. New vision based navigation clue for a regular colonoscope's tip

    NASA Astrophysics Data System (ADS)

    Mekaouar, Anouar; Ben Amar, Chokri; Redarce, Tanneguy

    2009-02-01

    Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to be performed safely. Indeed, the practitioner must contend with both the tortuousness of the colon and the mastering of the colonoscope: he has to take the visual data acquired by the scope's tip into account and rely mostly on common sense and skill to steer it in a fashion that promotes safe insertion of the device's shaft. In that context, we propose a new navigation clue for the tip of a regular colonoscope in order to assist surgeons during a colonoscopic examination. Firstly, we consider a patch of the inner colon depicted in a regular colonoscopy frame. We then perform a coarse 3D reconstruction of the corresponding 2D data, and a suggested navigation trajectory is derived from the obtained relief. Both the visible and invisible lumen cases are considered. Thanks to its low computational cost, this strategy accommodates intraoperative configuration changes and thus mitigates the effect of the colon's non-rigidity. Moreover, it tends to provide a safe navigation trajectory through the whole colon, since the approach aims at keeping the extremity of the instrument as far as possible from the colon wall during navigation. To make the proposed process effective, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.

  12. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
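
    The core iteration described above is short enough to transcribe directly; this dense-SVD sketch is only practical for small matrices, whereas the paper exploits problem structure to compute a low-rank SVD at scale:

        import numpy as np

        def soft_impute(X, mask, lam, iters=100):
            """Soft-Impute: fill the missing entries from the current estimate,
            then soft-threshold the singular values (the nuclear-norm prox)."""
            Z = np.zeros_like(X)
            for _ in range(iters):
                filled = np.where(mask, X, Z)   # observed entries stay fixed
                U, s, Vt = np.linalg.svd(filled, full_matrices=False)
                Z = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
            return Z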

  13. Global Regularity for Several Incompressible Fluid Models with Partial Dissipation

    NASA Astrophysics Data System (ADS)

    Wu, Jiahong; Xu, Xiaojing; Ye, Zhuan

    2016-09-01

    This paper examines the global regularity problem on several 2D incompressible fluid models with partial dissipation. They are the surface quasi-geostrophic (SQG) equation, the 2D Euler equation and the 2D Boussinesq equations. These are well-known models in fluid mechanics and geophysics. The fundamental issue of whether or not they are globally well-posed has attracted enormous attention. The corresponding models with partial dissipation may arise in physical circumstances when the dissipation varies in different directions. We show that the SQG equation with either horizontal or vertical dissipation always has global solutions. This is in sharp contrast with the inviscid SQG equation for which the global regularity problem remains outstandingly open. Although the 2D Euler is globally well-posed for sufficiently smooth data, the associated equations with partial dissipation no longer conserve the vorticity and the global regularity is not trivial. We are able to prove the global regularity for two partially dissipated Euler equations. Several global bounds are also obtained for a partially dissipated Boussinesq system.

  14. Elementary Teachers' Perspectives of Inclusion in the Regular Education Classroom

    ERIC Educational Resources Information Center

    Olinger, Becky Lorraine

    2013-01-01

    The purpose of this qualitative study was to examine regular education and special education teacher perceptions of inclusion services in an elementary school setting. In this phenomenological study, purposeful sampling techniques and data were used to conduct a study of inclusion in the elementary schools. In-depth one-to-one interviews with 8…

  15. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework. PMID:23846472
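
    The l_p/Schatten link mentioned in the abstract is easy to state for the nuclear-norm case (p = 1): the proximal map of the Schatten norm is obtained by applying the corresponding vector prox to the singular values. A short numpy sketch of that special case, under our own naming, is:

```python
import numpy as np

def prox_l1(x, tau):
    # Proximal map of tau*||x||_1: elementwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_schatten_1(M, tau):
    # Proximal map of tau*||M||_S1 (nuclear norm): apply the l1 prox
    # to the singular values, then rebuild the matrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * prox_l1(s, tau)) @ Vt
```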

  16. Factors Contributing to Regular Smoking in Adolescents in Turkey

    ERIC Educational Resources Information Center

    Can, Gamze; Topbas, Murat; Oztuna, Funda; Ozgun, Sukru; Can, Emine; Yavuzyilmaz, Asuman

    2009-01-01

    Purpose: The objectives of this study were to determine the levels of lifetime cigarette use, daily use, and current use among young people (aged 15-19 years) and to examine the risk factors contributing to regular smoking. Methods: The number of students was determined proportionately to the numbers of students in all the high schools in the…

  17. Factors Relating to Regular Education Teacher Burnout in Inclusive Education

    ERIC Educational Resources Information Center

    Talmor, Rachel; Reiter, Shunit; Feigin, Neomi

    2005-01-01

    The aims of the research were to identify the environmental factors that relate to the work of regular school teachers who have students with special needs in their classroom, and to find out the correlation between these factors and teacher burnout. A total 330 primary school teachers filled in a questionnaire that had three parts: (1) personal…

  18. 75 FR 13598 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 10 a.m., Monday, March 22, 2010. PLACE: 1325 G Street, NW., Suite 800 Boardroom, Washington, DC 20005. STATUS: Open. CONTACT PERSON...

  19. 76 FR 14699 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-17

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 11 a.m., Tuesday, March 22, 2011. PLACE: 1325 G Street, NW., Suite 800, Boardroom, Washington, DC 20005. STATUS: Open. CONTACT PERSON...

  20. 75 FR 59747 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-28

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act Time and Date: 2 p.m., Wednesday, September 22, 2010. Place: 1325 G Street NW., Suite 800, Boardroom, Washington, DC 20005. Status: Open....

  1. 75 FR 77010 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-10

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act Time and Date: 2:30 p.m., Wednesday, December 15, 2010. Place: 1325 G Street, NW., Suite 800, Boardroom, Washington, DC 20005. Status: Open....

  2. 78 FR 36794 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-19

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 9:30 a.m., Tuesday, June 25, 2013. PLACE: 999 North Capitol St NE., Suite 900, Gramlich Boardroom, Washington, DC 20002. STATUS:...

  3. 77 FR 58416 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-20

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 2:00 p.m., Monday, October 1, 2012.... Call to Order II. Executive Session III. Approval of the Annual Board of Directors Meeting Minutes...

  4. Sparse regularization techniques provide novel insights into outcome integration processes.

    PubMed

    Mohr, Holger; Wolfensteller, Uta; Frimmel, Steffi; Ruge, Hannes

    2015-01-01

    By exploiting information that is contained in the spatial arrangement of neural activations, multivariate pattern analysis (MVPA) can detect distributed brain activations which are not accessible by standard univariate analysis. Recent methodological advances in MVPA regularization techniques have made it feasible to produce sparse discriminative whole-brain maps with highly specific patterns. Furthermore, the most recent refinement, the Graph Net, explicitly takes the 3D-structure of fMRI data into account. Here, these advanced classification methods were applied to a large fMRI sample (N=70) in order to gain novel insights into the functional localization of outcome integration processes. While the beneficial effect of differential outcomes is well-studied in trial-and-error learning, outcome integration in the context of instruction-based learning has remained largely unexplored. In order to examine neural processes associated with outcome integration in the context of instruction-based learning, two groups of subjects underwent functional imaging while being presented with either differential or ambiguous outcomes following the execution of varying stimulus-response instructions. While no significant univariate group differences were found in the resulting fMRI dataset, L1-regularized (sparse) classifiers performed significantly above chance and also clearly outperformed the standard L2-regularized (dense) Support Vector Machine on this whole-brain between-subject classification task. Moreover, additional L2-regularization via the Elastic Net and spatial regularization by the Graph Net improved interpretability of discriminative weight maps but were accompanied by reduced classification accuracies. Most importantly, classification based on sparse regularization facilitated the identification of highly specific regions differentially engaged under ambiguous and differential outcome conditions, comprising several prefrontal regions previously associated with
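
    The sparse-versus-dense contrast described above can be reproduced in miniature with any off-the-shelf linear classifier. The sketch below uses scikit-learn on synthetic stand-in data (the study's actual pipeline used Graph Net and Elastic Net variants on fMRI volumes, which this does not attempt to replicate); it only illustrates how an L1 penalty zeroes out most feature weights while an L2 penalty keeps them dense.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: 70 "subjects", 5000 "voxel" features, binary group label
rng = np.random.default_rng(0)
X = rng.normal(size=(70, 5000))
y = rng.integers(0, 2, size=70)

# Sparse (L1) classifier: most weights are driven exactly to zero
sparse_clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
# Dense (L2) classifier: weights are shrunk but almost all stay nonzero
dense_clf = LogisticRegression(penalty="l2", C=0.1).fit(X, y)

print("nonzero weights, L1:", np.count_nonzero(sparse_clf.coef_))
print("nonzero weights, L2:", np.count_nonzero(dense_clf.coef_))
```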

  5. A simple way to measure daily lifestyle regularity

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Frank, Ellen; Potts, Jaime M.; Kupfer, David J.

    2002-01-01

    A brief diary instrument to quantify daily lifestyle regularity (SRM-5) is developed and compared with a much longer version of the instrument (SRM-17) described and used previously. Three studies are described. In Study 1, SRM-17 scores (2 weeks) were collected from a total of 293 healthy control subjects (both genders) aged between 19 and 92 years. Five items (1) Get out of bed, (2) First contact with another person, (3) Start work, housework or volunteer activities, (4) Have dinner, and (5) Go to bed were then selected from the 17 items and SRM-5 scores calculated as if these five items were the only ones collected. Comparisons were made with SRM-17 scores from the same subject-weeks, looking at correlations between the two SRM measures, and the effects of age and gender on lifestyle regularity as measured by the two instruments. In Study 2 this process was repeated in a group of 27 subjects who were in remission from unipolar depression after treatment with psychotherapy and who completed SRM-17 for at least 20 successive weeks. SRM-5 and SRM-17 scores were then correlated within an individual using time as the random variable, allowing an indication of how successful SRM-5 was in tracking changes in lifestyle regularity (within an individual) over time. In Study 3 an SRM-5 diary instrument was administered to 101 healthy control subjects (both genders, aged 20-59 years) for two successive weeks to obtain normative measures and to test for correlations with age and morningness. Measures of lifestyle regularity from SRM-5 correlated quite well (about 0.8) with those from SRM-17 both between subjects, and within-subjects over time. As a detector of irregularity as defined by SRM-17, the SRM-5 instrument showed acceptable values of kappa (0.69), sensitivity (74%) and specificity (95%). There were, however, differences in mean level, with SRM-5 scores being about 0.9 units [about one standard deviation (SD)] above SRM-17 scores from the same subject-weeks. SRM-5

  6. Nonlinear Identification Using Orthogonal Forward Regression With Nested Optimal Regularization.

    PubMed

    Hong, Xia; Chen, Sheng; Gao, Junbin; Harris, Chris J

    2015-12-01

    An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each RBF kernel has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is likewise capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known support vector machine and least absolute shrinkage and selection operator approaches as well as the LROLS algorithm.
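
    The LOOMSE ingredient used at every OFR step has a convenient closed form. The sketch below shows it for a single global ridge parameter (the paper optimizes a separate width and regularizer per kernel inside OFR, which is not reproduced here); it relies on the standard hat-matrix identity e_i = (y_i − ŷ_i)/(1 − H_ii).

```python
import numpy as np

def loo_mse_ridge(Phi, y, lam):
    """Closed-form leave-one-out MSE for ridge regression on design matrix Phi."""
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T)
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

# Pick the regularization parameter by scanning the LOO criterion on a grid
rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 20))
y = Phi[:, 0] + 0.1 * rng.normal(size=100)
lams = np.logspace(-4, 2, 25)
best_lam = min(lams, key=lambda l: loo_mse_ridge(Phi, y, l))
```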

  7. Inverse problems: Fuzzy representation of uncertainty generates a regularization

    NASA Technical Reports Server (NTRS)

    Kreinovich, V.; Chang, Ching-Chuang; Reznik, L.; Solopchenko, G. N.

    1992-01-01

    In many applied problems (geophysics, medicine, and astronomy) we cannot directly measure the values x(t) of the desired physical quantity x in different moments of time, so we measure some related quantity y(t), and then we try to reconstruct the desired values x(t). This problem is often ill-posed in the sense that two essentially different functions x(t) are consistent with the same measurement results. So, in order to get a reasonable reconstruction, we must have some additional prior information about the desired function x(t). Methods that use this information to choose x(t) from the set of all possible solutions are called regularization methods. In some cases, we know the statistical characteristics both of x(t) and of the measurement errors, so we can apply statistical filtering methods (well-developed since the invention of the Wiener filter). In some situations, we know the properties of the desired process, e.g., we know that the derivative of x(t) is limited by some number delta, etc. In this case, we can apply standard regularization techniques (e.g., Tikhonov's regularization). In many cases, however, we have only uncertain knowledge about the values of x(t), about the rate with which the values of x(t) can change, and about the measurement errors. In these cases, usually one of the existing regularization methods is applied, and several heuristics exist for choosing such a method. The problem with these heuristics is that they often lead to choosing different methods, and these methods lead to different functions x(t); therefore, the results x(t) of applying these heuristic methods are often unreliable. We show that if we use fuzzy logic to describe this uncertainty, then we automatically arrive at a unique regularization method, whose parameters are uniquely determined by the experts' knowledge. Although we start with a fuzzy description, the resulting regularization turns out to be quite crisp.

  8. 20 CFR 220.17 - Recovery from disability for work in the regular occupation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Work in an Employee's Regular Railroad Occupation § 220.17 Recovery from disability for work in the regular occupation. (a) General. Disability for work in the regular occupation will end if— (1) There is... the duties of his or her regular occupation. The Board provides a trial work period before...

  9. 20 CFR 220.17 - Recovery from disability for work in the regular occupation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Work in an Employee's Regular Railroad Occupation § 220.17 Recovery from disability for work in the regular occupation. (a) General. Disability for work in the regular occupation will end if— (1) There is... the duties of his or her regular occupation. The Board provides a trial work period before...

  10. 20 CFR 216.16 - What is regular non-railroad employment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false What is regular non-railroad employment. 216... regular non-railroad employment. (a) Regular non-railroad employment is full or part-time employment for pay. (b) Regular non-railroad employment does not include any of the following: (1)...

  11. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for recognizing facial expressions, including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
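
    For concreteness, the covariance regularization at the core of RDA can be written in a few lines. This is a Friedman-style blend under our own parameter names, a sketch only, not the authors' PSO-tuned implementation.

```python
import numpy as np

def rda_covariance(S_k, S_pooled, lam, gamma):
    """Regularized discriminant analysis covariance: interpolate between the
    class covariance (QDA limit, lam=0) and the pooled covariance (LDA limit,
    lam=1), then shrink toward a scaled identity to cure ill-posedness."""
    p = S_k.shape[0]
    S = (1.0 - lam) * S_k + lam * S_pooled
    return (1.0 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
```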

  12. Regular Expression-Based Learning for METs Value Extraction

    PubMed Central

    Redd, Douglas; Kuang, Jinqiu; Mohanty, April; Bray, Bruce E.; Zeng-Treitler, Qing

    2016-01-01

    Functional status as measured by exercise capacity is an important clinical variable in the care of patients with cardiovascular diseases. Exercise capacity is commonly reported in terms of Metabolic Equivalents (METs). In the medical records, METs can often be found in a variety of clinical notes. To extract METs values, we adapted a machine-learning algorithm called REDEx to automatically generate regular expressions. Trained and tested on a set of 2701 manually annotated text snippets (i.e. short pieces of text), the regular expressions were able to achieve good accuracy and F-measure of 0.89 and 0.86. This extraction tool will allow us to process the notes of millions of cardiovascular patients and extract METs value for use by researchers and clinicians. PMID:27570673
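
    REDEx learns its regular expressions automatically from annotated snippets; the hand-written pattern below is only meant to illustrate the kind of expression such a tool produces for METs extraction (the pattern and test note are our own, not taken from the paper).

```python
import re

# Capture a numeric METs value, optionally a range such as "5-6 METs"
mets_pattern = re.compile(
    r"(\d+(?:\.\d+)?)\s*(?:-|to)?\s*(\d+(?:\.\d+)?)?\s*METs?",
    re.IGNORECASE,
)

note = "Stress test: patient exercised 9 minutes, achieving 10.1 METs."
for match in mets_pattern.finditer(note):
    low, high = match.group(1), match.group(2)
    print("METs value:", low if high is None else f"{low}-{high}")
```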

  15. Total Variation Regularization Used in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Wang, Huaxiang; Tang, Lei

    2007-06-01

    To solve the ill-posed problem and poor resolution in electrical capacitance tomography (ECT), a new image reconstruction algorithm based on total variation (TV) regularization is proposed, together with a new self-adaptive mesh refinement strategy. Compared with conventional Tikhonov regularization, the new algorithm not only stabilizes the reconstruction but also enhances the distinguishability of the reconstructed image in areas with discontinuous medium distribution; it possesses a good edge-preserving property. The self-adaptive mesh generation technique based on this algorithm refines the mesh automatically in specific areas according to the medium distribution. This strategy retains the high resolution obtained by refining all elements over the region while reducing the computational load, thereby speeding up the reconstruction. Both simulation and experimental results show that this algorithm has advantages in terms of resolution and real-time performance.

  16. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.

  17. Vertical Accretion in Microtidal Regularly and Irregularly Flooded Estuarine Marshes

    NASA Astrophysics Data System (ADS)

    Craft, C. B.; Seneca, E. D.; Broome, S. W.

    1993-10-01

    Vertical accretion rates were measured in microtidal (tidal amplitude less than 0.3 m) regularly flooded (flooded twice daily by the astronomical tides) and irregularly flooded (inundated only during spring and storm tides) estuarine marshes in North Carolina to determine whether these marshes are keeping pace with rising sea-level and to quantify the relative contribution of organic matter and mineral sediment to vertical growth. Accretion rates in streamside and backmarsh locations of each marsh were determined by measuring the Cesium-137 (¹³⁷Cs) activity in 2 cm soil depth increments. Soil bulk density, organic carbon (C), total nitrogen (N) and particle density also were measured to estimate rates of accumulation of organic matter (OM), mineral sediment and nutrients. With the exception of the backmarsh location of the regularly flooded marsh, vertical accretion rates in the marshes studied matched or exceeded the recent (1940-80) rate of sea-level rise (1.9 mm year⁻¹) along the North Carolina coast. Accretion rates in the irregularly flooded marsh averaged 3.6 ± 0.5 mm year⁻¹ along the streamside and 2.4 ± 0.2 mm year⁻¹ in the backmarsh. The regularly flooded marsh had lower accretion rates, averaging 2.7 ± 0.3 mm year⁻¹ along the streamside and 0.9 ± 0.2 mm year⁻¹ in the backmarsh. Vertical accretion in the irregularly flooded marsh occurred via in situ production and accumulation of organic matter. Rates of soil OM (196-280 g m⁻² year⁻¹), organic C (106-146 g m⁻² year⁻¹) and total N (6.9-10.3 g m⁻² year⁻¹) accumulation were much higher in the irregularly flooded marsh as compared to the regularly flooded marsh (OM = 51-137 g m⁻² year⁻¹, C = 21-59 g m⁻² year⁻¹, N = 1.3-4.1 g m⁻² year⁻¹). In contrast, vertical accretion in the regularly flooded marsh was sustained by allochthonous inputs of mineral sediment. Inorganic sediment deposition contributed 677-1139 g m⁻² year⁻¹ of mineral matter to the regularly flooded marsh as compared

  18. Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation.

    PubMed

    Meng, Juan; Hu, Guyu; Li, Dong; Zhang, Yanyan; Pan, Zhisong

    2016-01-01

    Domain adaptation has received much attention as a major form of transfer learning. One issue that should be considered in domain adaptation is the gap between the source domain and the target domain. In order to improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation that combines source and target data, with a new regularizer that takes generalization bounds into account. This regularization term uses an integral probability metric (IPM) as the distance between the source and target domains, and can thus bound the testing error of an existing predictor. Since the computation of the IPM involves only the two distributions, this generalization term is independent of the specific classifier. With popular learning models, the empirical risk minimization is expressed as a general convex optimization problem and thus can be solved effectively by existing tools. Empirical studies on synthetic data for regression and real-world data for classification show the effectiveness of this method.

  19. Resolving intravoxel fiber architecture using nonconvex regularized blind compressed sensing

    NASA Astrophysics Data System (ADS)

    Chu, C. Y.; Huang, J. P.; Sun, C. Y.; Liu, W. Y.; Zhu, Y. M.

    2015-03-01

    In diffusion magnetic resonance imaging, accurate and reliable estimation of intravoxel fiber architectures is a major prerequisite for tractography algorithms or any other derived statistical analysis. Several methods have been proposed that estimate intravoxel fiber architectures using low angular resolution acquisitions owing to their shorter acquisition time and relatively low b-values. But these methods are highly sensitive to noise. In this work, we propose a nonconvex regularized blind compressed sensing approach to estimate intravoxel fiber architectures in low angular resolution acquisitions. The method models diffusion-weighted (DW) signals as a sparse linear combination of unfixed reconstruction basis functions and introduces a nonconvex regularizer to enhance the noise immunity. We present a general solving framework to simultaneously estimate the sparse coefficients and the reconstruction basis. Experiments on synthetic, phantom, and real human brain DW images demonstrate the superiority of the proposed approach.

  20. Deconvolution of axisymmetric flame properties using Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Daun, Kyle J.; Thomson, Kevin A.; Liu, Fengshan; Smallwood, Greg J.

    2006-07-01

    We present a method based on Tikhonov regularization for solving one-dimensional inverse tomography problems that arise in combustion applications. In this technique, Tikhonov regularization transforms the ill-conditioned set of equations generated by onion-peeling deconvolution into a well-conditioned set that is less susceptible to measurement errors that arise in experimental settings. The performance of this method is compared to that of onion-peeling and Abel three-point deconvolution by solving for a known field variable distribution from projected data contaminated with an artificially generated error. The results show that Tikhonov deconvolution provides a more accurate field distribution than onion-peeling and Abel three-point deconvolution and is more stable than the other two methods as the distance between projected data points decreases.
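
    The regularized deconvolution step has a simple closed form: with projection matrix A, data b, and a first-difference smoothing operator L, the Tikhonov solution minimizes ||Ax − b||² + λ²||Lx||². The numpy sketch below is generic; the upper-triangular A in the usage example is only a crude stand-in for an onion-peeling operator, not the discretization used in the paper.

```python
import numpy as np

def tikhonov_deconvolve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 with L = first differences."""
    n = A.shape[1]
    L = (np.eye(n) - np.eye(n, k=1))[:-1]            # (n-1) x n difference matrix
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

# Toy usage on an ill-conditioned triangular "projection" matrix
rng = np.random.default_rng(1)
A = np.triu(np.ones((30, 30)))
x_true = np.exp(-np.linspace(-2, 2, 30) ** 2)
b = A @ x_true + 0.01 * rng.normal(size=30)
x_rec = tikhonov_deconvolve(A, b, lam=1.0)
```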

  1. Regular Expressions at Their Best: A Case for Rational Design

    NASA Astrophysics Data System (ADS)

    Le Maout, Vincent

    Regular expressions are often an integral part of program customization and many algorithms have been proposed for transforming them into suitable data structures. These algorithms can be divided into two main classes: backtracking or automaton-based algorithms. Surprisingly, the latter class draws less attention than the former, even though automaton-based algorithms represent the oldest and by far the fastest solutions when carefully designed. Only two open-source automaton-based implementations stand out: PCRE and the recent RE2 from Google. We have developed, and present here, a competitive automaton-based regular expression engine on top of the LGPL C++ Automata Standard Template Library (ASTL), whose efficiency and scalability remain unmatched and which distinguishes itself through a unique and rigorous STL-like design.

  2. Mechanisms of evolution of avalanches in regular graphs.

    PubMed

    Handford, Thomas P; Pérez-Reche, Francisco J; Taraskin, Sergei N

    2013-06-01

    A mapping of avalanches occurring in the zero-temperature random-field Ising model to life periods of a population experiencing immigration is established. Such a mapping allows the microscopic criteria for the occurrence of an infinite avalanche in a q-regular graph to be determined. A key factor for an avalanche of spin flips to become infinite is that it interacts in an optimal way with previously flipped spins. Based on these criteria, we explain why an infinite avalanche can occur in q-regular graphs only for q>3 and suggest that this criterion might be relevant for other systems. The generating function techniques developed for branching processes are applied to obtain analytical expressions for the durations, pulse shapes, and power spectra of the avalanches. The results show that only very long avalanches exhibit a significant degree of universality.

  4. The effect of spacing regularity on visual crowding.

    PubMed

    Saarela, T P; Westheimer, G; Herzog, M H

    2010-08-18

    Crowding limits peripheral visual discrimination and recognition: a target easily identified in isolation becomes impossible to recognize when surrounded by other stimuli, often called flankers. Most accounts of crowding predict less crowding when the target-flanker distance increases. On the other hand, the importance of perceptual organization and target-flanker coherence in crowding has recently received more attention. We investigated the effect of target-flanker spacing on crowding in multi-element stimulus arrays. We show that increasing the average distance between the target and the flankers does not always decrease the amount of crowding but can even sometimes increase it. We suggest that the regularity of inter-element spacing plays an important role in determining the strength of crowding: regular spacing leads to the perception of a single, coherent, texture-like stimulus, making judgments about the individual elements difficult.

  5. Tikhonov regularization-based operational transfer path analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Wei; Lu, Yingying; Zhang, Zhousuo

    2016-06-01

    To overcome ill-posed problems in operational transfer path analysis (OTPA) and improve the stability of solutions, this paper proposes a novel OTPA based on Tikhonov regularization, which considers both the fitting degree and the stability of solutions. Firstly, the fundamental theory of Tikhonov regularization-based OTPA is presented, and comparative studies are provided to validate its effectiveness on ill-posed problems. Secondly, transfer path analysis and source contribution evaluations for numerical case studies on spherical radiating acoustical sources are comparatively studied. Finally, transfer path analysis and source contribution evaluations for experimental case studies on a test bed with thin shell structures are provided. This study provides more accurate transfer path analysis for mechanical systems, which can benefit vibration reduction through structural path optimization. Furthermore, with accurate evaluation of source contributions, vibration monitoring and control by actively controlling vibration sources can be effectively carried out.

  6. Statistical regularities in the rank-citation profile of scientists

    PubMed Central

    Petersen, Alexander M.; Stanley, H. Eugene; Succi, Sauro

    2011-01-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c_i(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c_i(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c_i(r) profiles, our results demonstrate the utility of the β_i scaling parameter in conjunction with h_i for quantifying individual publication impact. We show that the total number of citations C_i tallied from a scientist's N_i papers scales as […]. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress. PMID:22355696

  7. Universal regularizers for robust sparse coding and modeling.

    PubMed

    Ramírez, Ignacio; Sapiro, Guillermo

    2012-09-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared with the more standard l_0 or l_1 ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.
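
    As a point of reference for such regularizers, the standard l_1 sparse coding model they are compared against can be solved with plain ISTA in a few lines. The sketch below is that baseline model only; the universal (codelength-based) regularizers proposed in the paper replace the l_1 term and are not implemented here.

```python
import numpy as np

def ista_sparse_code(D, x, lam, n_iters=200):
    """Baseline l1 sparse coding: min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        z = a - D.T @ (D @ a - x) / L      # gradient step on the data term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return a
```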

  8. Regularizing the r-mode Problem for Nonbarotropic Relativistic Stars

    NASA Technical Reports Server (NTRS)

    Lockitch, Keith H.; Andersson, Nils; Watts, Anna L.

    2004-01-01

    We present results for r-modes of relativistic nonbarotropic stars. We show that the main differential equation, which is formally singular at lowest order in the slow-rotation expansion, can be regularized if one considers the initial value problem rather than the normal mode problem. However, a more physically motivated way to regularize the problem is to include higher order terms. This allows us to develop a practical approach for solving the problem and we provide results that support earlier conclusions obtained for uniform density stars. In particular, we show that there will exist a single r-mode for each permissible combination of l and m. We discuss these results and provide some caveats regarding their usefulness for estimates of gravitational-radiation reaction timescales. The close connection between the seemingly singular relativistic r-mode problem and issues arising because of the presence of co-rotation points in differentially rotating stars is also clarified.

  9. Partial Regularity for Holonomic Minimisers of Quasiconvex Functionals

    NASA Astrophysics Data System (ADS)

    Hopper, Christopher P.

    2016-10-01

    We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals, over a class of Sobolev mappings into a compact Riemannian manifold, to which such mappings are said to be holonomically constrained. Our approach uses the lifting of Sobolev mappings to the universal covering space, the connectedness of the covering space, an application of Ekeland's variational principle and a certain tangential A-harmonic approximation lemma obtained directly via a Lipschitz approximation argument. This allows regularity to be established directly on the level of the gradient. Several applications to variational problems in condensed matter physics with broken symmetries are also discussed, in particular those concerning the superfluidity of liquid helium-3 and nematic liquid crystals.

  10. Total variation regularization for bioluminescence tomography with the split Bregman method.

    PubMed

    Feng, Jinchao; Qin, Chenghu; Jia, Kebin; Zhu, Shouping; Liu, Kai; Han, Dong; Yang, Xin; Gao, Quansheng; Tian, Jie

    2012-07-01

    Regularization methods have been broadly applied to bioluminescence tomography (BLT) to obtain stable solutions, including l2 and l1 regularizations. However, l2 regularization can oversmooth reconstructed images and l1 regularization may sparsify the source distribution, which degrades image quality. In this paper, the use of total variation (TV) regularization in BLT is investigated. Since a nonnegativity constraint can lead to improved image quality, the nonnegative constraint should be considered in BLT. However, TV regularization with a nonnegativity constraint is extremely difficult to solve due to its nondifferentiability and nonlinearity. The aim of this work is to validate the split Bregman method to minimize the TV regularization problem with a nonnegativity constraint for BLT. The performance of split Bregman-resolved TV (SBRTV) based BLT reconstruction algorithm was verified with numerical and in vivo experiments. Experimental results demonstrate that the SBRTV regularization can provide better regularization quality over l2 and l1 regularizations.
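
    The splitting at the heart of the method is easiest to see on an unconstrained 1D total-variation denoising problem, min_u 0.5||u − f||² + λ||Du||₁. The sketch below shows that core split Bregman loop only; the paper's BLT reconstruction additionally handles a tomographic forward operator and a nonnegativity constraint, which are omitted here.

```python
import numpy as np

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_split_bregman(f, lam, mu=1.0, n_iters=100):
    """1D TV denoising via split Bregman with the splitting d = Du."""
    n = f.size
    D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]    # forward differences
    A = np.eye(n) + mu * D.T @ D
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iters):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))  # quadratic subproblem
        Du = D @ u
        d = shrink(Du + b, lam / mu)                    # shrinkage subproblem
        b = b + Du - d                                  # Bregman update
    return u
```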

  11. Perfect state transfer over distance-regular spin networks

    SciTech Connect

    Jafarizadeh, M. A.; Sufiani, R.

    2008-02-15

    Christandl et al. have noted that the d-dimensional hypercube can be projected to a linear chain with d+1 sites so that, by considering fixed but different couplings between the qubits assigned to the sites, perfect state transfer (PST) can be achieved over arbitrarily long distances in the chain [Phys. Rev. Lett. 92, 187902 (2004); Phys. Rev. A 71, 032312 (2005)]. In this work we consider distance-regular graphs as spin networks and note that any such network (not just the hypercube) can be projected to a linear chain and so can allow PST over long distances. We consider some particular spin Hamiltonians which are extended versions of those of Christandl et al. Then, by using techniques such as stratification of distance-regular graphs and spectral analysis methods, we give a procedure for finding a set of coupling constants in the Hamiltonians so that a particular state initially encoded on one site will evolve freely to the opposite site without any dynamical control; i.e., we show how to derive the parameters of the system so that PST can be achieved. PST turns out to be allowed only in distance-regular spin networks for which, starting from an arbitrary reference vertex (prepared in the initial state we wish to transfer), the last stratum of the network with respect to the reference state contains only one vertex; that is, the stratification of these networks determines in which kinds of networks, and between which of their vertices, PST is possible. As examples, the cycle network with an even number of vertices and the d-dimensional hypercube are considered in detail, and the method is applied to some important distance-regular networks.
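
    The chain construction referenced above can be checked numerically in a few lines. This sketch simulates only the standard Christandl chain (couplings J_n = √(n(N−n))/2 in the single-excitation subspace), not the more general distance-regular constructions of the paper; the evenly spaced spectrum transfers the excitation from site 1 to site N at t = π.

```python
import numpy as np
from scipy.linalg import expm

N = 7
J = np.array([np.sqrt(n * (N - n)) / 2.0 for n in range(1, N)])
H = np.diag(J, 1) + np.diag(J, -1)        # single-excitation XY Hamiltonian

psi0 = np.zeros(N)
psi0[0] = 1.0                             # excitation initially at site 1
psi_t = expm(-1j * np.pi * H) @ psi0      # evolve to the transfer time t = pi
print("transfer fidelity:", abs(psi_t[-1]))   # ~1.0: perfect state transfer
```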

  12. Knowing More than One Can Say: The Early Regular Plural

    ERIC Educational Resources Information Center

    Zapf, Jennifer A.; Smith, Linda B.

    2009-01-01

    This paper reports on partial knowledge in two-year-old children's learning of the regular English plural. In Experiments 1 and 2, children were presented with one kind and its label and then were either presented with two of that same kind (A→AA) or the initial picture next to a very different thing (A→AB). The children in…

  13. Manifestly scale-invariant regularization and quantum effective operators

    NASA Astrophysics Data System (ADS)

    Ghilencea, D. M.

    2016-05-01

    Scale-invariant theories are often used to address the hierarchy problem. However, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function μ(σ) which, after spontaneous breaking of the scale symmetry, generates the usual dimensional regularization subtraction scale μ(⟨σ⟩). One consequence is that "evanescent" interactions generated by scale invariance of the action in d = 4 − 2ε (but vanishing in d = 4) give rise to new, finite quantum corrections. We find a (finite) correction ΔU(ϕ, σ) to the one-loop scalar potential for ϕ and σ, beyond the Coleman-Weinberg term. ΔU is due to an evanescent correction (∝ ε) to the field-dependent masses (of the states in the loop) which multiplies the pole (∝ 1/ε) of the momentum integral to give a finite quantum result. ΔU contains a nonpolynomial operator ∼ ϕ⁶/σ² of known coefficient and is independent of the subtraction dimensionless parameter. A more general μ(ϕ, σ) is ruled out since, in their classical decoupling limit, the visible sector (of the Higgs ϕ) and the hidden sector (dilaton σ) would still interact at the quantum level; thus, the subtraction function must depend on the dilaton only, μ ∼ σ. The method is useful in models where preserving scale symmetry at the quantum level is important.

  14. Visual Mismatch Negativity Reveals Automatic Detection of Sequential Regularity Violation

    PubMed Central

    Stefanics, Gábor; Kimura, Motohiro; Czigler, István

    2011-01-01

    Sequential regularities are abstract rules based on repeating sequences of environmental events, which are useful for making predictions about future events. Here, we tested whether the visual system is capable of detecting sequential regularity in unattended stimulus sequences. The visual mismatch negativity (vMMN) component of the event-related potentials is sensitive to the violation of complex regularities (e.g., object-related characteristics, temporal patterns). We used the vMMN component as an index of violation of conditional (if, then) regularities. In the first experiment, to investigate the emergence of vMMN and other change-related activity to the violation of conditional rules, red and green disk patterns were delivered in pairs. The majority of pairs comprised disk patterns with identical colors, whereas in deviant pairs the colors were different. The probabilities of the two colors were equal. The second member of the deviant pairs elicited a vMMN with a longer latency and a more extended spatial distribution to deviants with lower probability (10 vs. 30%). In the second (control) experiment, the emergence of vMMN to violation of a simple, feature-related rule was studied using oddball sequences of stimulus pairs where deviant colors were presented with 20% probability. Deviant colored patterns elicited a vMMN, and this component was larger for the second member of the pair, i.e., after a shorter inter-stimulus interval. This result corresponds to the SOA/(v)MMN relationship expected on the basis of a memory-mismatch process. Our results show that the system underlying vMMN is sensitive to abstract, conditional rules. Representation of such rules implicates expectation of a subsequent event; therefore, vMMN can be considered a correlate of violated predictions about the characteristics of environmental events. PMID:21629766

  15. Effects of regular exercise training on skeletal muscle contractile function

    NASA Technical Reports Server (NTRS)

    Fitts, Robert H.

    2003-01-01

    Skeletal muscle function is critical to movement and one's ability to perform daily tasks, such as eating and walking. One objective of this article is to review the contractile properties of fast and slow skeletal muscle and single fibers, with particular emphasis on the cellular events that control or rate limit the important mechanical properties. Another important goal of this article is to present the current understanding of how the contractile properties of limb skeletal muscle adapt to programs of regular exercise.

  16. Holographic Wilson loops, Hamilton-Jacobi equation, and regularizations

    NASA Astrophysics Data System (ADS)

    Pontello, Diego; Trinchero, Roberto

    2016-04-01

    The minimal areas of surfaces whose borders are rectangular and circular loops are calculated using the Hamilton-Jacobi (HJ) equation. This amounts to solving the HJ equation for the value of the minimal area, without calculating the shape of the corresponding surface. This is done for bulk geometries that are asymptotically anti-de Sitter (AdS). For the rectangular contour, the HJ equation, which is separable, can be solved exactly. For the circular contour an expansion in powers of the radius is implemented. The HJ approach naturally leads to a regularization which consists in locating the contour away from the border. The results are compared with the ε-regularization, which leaves the contour at the border and calculates the area of the corresponding minimal surface up to a diameter smaller than that of the contour at the border. The results for the circular loop do not coincide if the expansion parameter is taken to be the radius of the contour at the border. It is shown that using this expansion parameter the ε-regularization leads to incorrect results for certain solvable non-AdS cases. However, if the expansion parameter is taken to be the radius of the minimal surface whose area is computed, then the results coincide with the HJ scheme. This is traced back to the fact that in the HJ case the expansion parameter for the area of a minimal surface is intrinsic to the surface; however, the radius of the contour at the border is related to the way one chooses to regularize, in the ε-scheme, the calculation of this area.

  17. Nonlinear Regularizing Effect for Hyperbolic Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Golse, François

    2010-03-01

    The Tartar-DiPerna compensated compactness method, used initially to construct global weak solutions of hyperbolic systems of conservation laws for large data, can be adapted in order to provide some regularity estimates on these solutions. This note treats two examples: (a) the case of scalar conservation laws with convex flux, and (b) the Euler system for a polytropic, compressible fluid, in space dimension one.

  18. [Iterated Tikhonov Regularization for Spectral Recovery from Tristimulus].

    PubMed

    Xie, De-hong; Li, Rui; Wan, Xiao-xia; Liu, Qiang; Zhu, Wen-feng

    2016-01-01

    Reflectance spectra in a multispectral image represent color information objectively and originally, owing to their high dimensionality and their independence of illuminant and device. To address the loss of spectral information when spectral data are reconstructed from three-dimensional colorimetric data in a trichromatic camera-based spectral image acquisition system, and the consequent loss of color information, this work proposes an iterated Tikhonov regularization for reconstructing reflectance spectra. First of all, following the relationship between colorimetric values and reflectance spectra in colorimetric theory, this work constructs a spectral reconstruction equation that recovers high-dimensional spectral data from the three-dimensional colorimetric data acquired by the trichromatic camera. Then, iterated Tikhonov regularization, inspired by the idea of the Moore-Penrose pseudoinverse, is used to cope with the linear ill-posed inverse problem that arises when solving the spectral reconstruction equation. Meanwhile, the work also uses the L-curve method to obtain an optimal regularization parameter for the iterated Tikhonov regularization by training on a set of samples. Through these methods, the ill-conditioning of the spectral reconstruction equation can be effectively controlled and improved, and the loss of spectral information in the reconstructed spectral data can subsequently be reduced. A verification experiment is performed on another set of training samples. The experimental results show that the proposed method reconstructs reflectance spectra with less loss of spectral information in the trichromatic camera-based spectral image acquisition system, reflected in clear decreases of spectral and colorimetric errors compared with the previous method.
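
    Generically, iterated Tikhonov repeatedly re-solves the regularized normal equations on the current residual; one iteration is ordinary Tikhonov, and further iterations reduce the regularization bias. The numpy sketch below is our generic illustration (here A would play the role of the colorimetric forward matrix, and lam would in practice be chosen by the L-curve method as in the paper):

```python
import numpy as np

def iterated_tikhonov(A, b, lam, n_iters=5):
    """Iterated Tikhonov: x_{k+1} = x_k + (A^T A + lam I)^{-1} A^T (b - A x_k)."""
    x = np.zeros(A.shape[1])
    M = A.T @ A + lam * np.eye(A.shape[1])
    for _ in range(n_iters):
        x = x + np.linalg.solve(M, A.T @ (b - A @ x))
    return x
```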

  19. 3D harmonic phase tracking with anatomical regularization.

    PubMed

    Zhou, Yitian; Bernard, Olivier; Saloux, Eric; Manrique, Alain; Allain, Pascal; Makram-Ebeid, Sherif; De Craene, Mathieu

    2015-12-01

    This paper presents a novel algorithm that extends HARP to handle 3D tagged MRI images. HARP results were regularized by an original regularization framework defined in an anatomical space of coordinates. In the meantime, myocardium incompressibility was integrated in order to correct the radial strain which is reported to be more challenging to recover. Both the tracking and regularization of LV displacements were done on a volumetric mesh to be computationally efficient. Also, a window-weighted regression method was extended to cardiac motion tracking which helps maintain a low complexity even at finer scales. On healthy volunteers, the tracking accuracy was found to be as accurate as the best candidates of a recent benchmark. Strain accuracy was evaluated on synthetic data, showing low bias and strain errors under 5% (excluding outliers) for longitudinal and circumferential strains, while the second and third quartiles of the radial strain errors are in the (-5%,5%) range. In clinical data, strain dispersion was shown to correlate with the extent of transmural fibrosis. Also, reduced deformation values were found inside infarcted segments.

  20. Comparison of regularization methods for human cardiac diffusion tensor MRI.

    PubMed

    Frindel, Carole; Robini, Marc; Croisille, Pierre; Zhu, Yue-Min

    2009-06-01

    Diffusion tensor MRI (DT-MRI) is an imaging technique that is gaining importance in clinical applications. However, there is very little work concerning the human heart. When applying DT-MRI to in vivo human hearts, the data have to be acquired rapidly to minimize artefacts due to cardiac and respiratory motion and to improve patient comfort, often at the expense of image quality. This results in diffusion weighted (DW) images corrupted by noise, which can have a significant impact on the shape and orientation of tensors and leads to diffusion tensor (DT) datasets that are not suitable for fibre tracking. This paper compares regularization approaches that operate either on diffusion weighted images or on diffusion tensors. Experiments on synthetic data show that, for high signal-to-noise ratio (SNR), the methods operating on DW images produce the best results; they substantially reduce noise error propagation throughout the diffusion calculations. However, when the SNR is low, Rician Cholesky and Log-Euclidean DT regularization methods handle the bias introduced by Rician noise and ensure symmetry and positive definiteness of the tensors. Results based on a set of sixteen ex vivo human hearts show that the different regularization methods tend to provide equivalent results. PMID:19356971

  1. Channeling power across ecological systems: social regularities in community organizing.

    PubMed

    Christens, Brian D; Inzeo, Paula Tran; Faust, Victoria

    2014-06-01

    Relational and social network perspectives provide opportunities for more holistic conceptualizations of phenomena of interest in community psychology, including power and empowerment. In this article, we apply these tools to build on multilevel frameworks of empowerment by proposing that networks of relationships between individuals constitute the connective spaces between ecological systems. Drawing on an example of a model for grassroots community organizing practiced by WISDOM—a statewide federation supporting local community organizing initiatives in Wisconsin—we identify social regularities (i.e., relational and temporal patterns) that promote empowerment and the development and exercise of social power through building and altering relational ties. Through an emphasis on listening-focused one-to-one meetings, reflection, and social analysis, WISDOM organizing initiatives construct and reinforce social regularities that develop social power in the organizing initiatives and advance psychological empowerment among participant leaders in organizing. These patterns are established by organizationally driven brokerage and mobilization of interpersonal ties, some of which span ecological systems. Hence, elements of these power-focused social regularities can be conceptualized as cross-system channels through which micro-level empowerment processes feed into macro-level exercise of social power, and vice versa. We describe examples of these channels in action, and offer recommendations for theory and the design of future action research.

  2. Hessian-Regularized Co-Training for Social Activity Recognition

    PubMed Central

    Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang

    2014-01-01

    Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training algorithms have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms. PMID:25259945

  3. Nonrigid registration using regularization that accommodates local tissue rigidity

    NASA Astrophysics Data System (ADS)

    Ruan, Dan; Fessler, Jeffrey A.; Roberson, Michael; Balter, James; Kessler, Marc

    2006-03-01

    Regularized nonrigid medical image registration algorithms usually estimate the deformation by minimizing a cost function consisting of a similarity measure and a penalty term that discourages "unreasonable" deformations. Conventional regularization methods enforce homogeneous smoothness properties of the deformation field; less work has been done to incorporate tissue-type-specific elasticity information. Yet ignoring the elasticity differences between tissue types can produce non-physical results, such as bone warping. Bone structures should move rigidly (locally), unlike the more elastic deformation of soft tissues. Existing solutions for this problem either treat different regions of an image independently, which requires precise segmentation and incurs boundary issues, or use an empirical spatially varying "filter" to "correct" the deformation field, which requires knowledge of a stiffness map and departs from the cost-function formulation. We propose a new approach to incorporate tissue rigidity information into the nonrigid registration problem, by developing a space-variant regularization function that encourages the local Jacobian of the deformation to be a nearly orthogonal matrix in rigid image regions, while allowing more elastic deformations elsewhere. For the case of X-ray CT data, we use a simple monotonically increasing function of the CT numbers (in HU) as a "rigidity index," since bones typically have the highest CT numbers. Unlike segmentation-based methods, this approach is flexible enough to account for partial volume effects. Results using a B-spline deformation parameterization illustrate that the proposed approach improves registration accuracy in inhale-exhale CT scans with minimal computational penalty.
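
    A minimal numerical sketch of the two ingredients follows, under hypothetical parameter values: the paper specifies only that the rigidity index is a monotonically increasing function of the CT number, not this particular sigmoid, and the center/width constants are our own illustrative choices.

```python
import numpy as np

def rigidity_index(hu, center=200.0, width=100.0):
    # Hypothetical monotone map from CT number (HU) to a [0, 1] rigidity
    # weight: bone (high HU) approaches 1, soft tissue approaches 0
    return 1.0 / (1.0 + np.exp(-(hu - center) / width))

def orthogonality_penalty(J, w):
    # Space-variant penalty w * ||J^T J - I||_F^2: encourages the local 3x3
    # deformation Jacobian J to be nearly orthogonal where the weight w is large
    E = J.T @ J - np.eye(3)
    return w * np.sum(E * E)

# Example: a nearly rigid local deformation in a bone region (HU = 700)
J_local = np.eye(3) + 0.01 * np.random.default_rng(0).normal(size=(3, 3))
print(orthogonality_penalty(J_local, rigidity_index(700.0)))
```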

  4. 3D harmonic phase tracking with anatomical regularization.

    PubMed

    Zhou, Yitian; Bernard, Olivier; Saloux, Eric; Manrique, Alain; Allain, Pascal; Makram-Ebeid, Sherif; De Craene, Mathieu

    2015-12-01

    This paper presents a novel algorithm that extends HARP to handle 3D tagged MRI images. HARP results were regularized by an original regularization framework defined in an anatomical space of coordinates. At the same time, myocardial incompressibility was integrated in order to correct the radial strain, which is reported to be more challenging to recover. Both the tracking and regularization of LV displacements were performed on a volumetric mesh for computational efficiency. In addition, a window-weighted regression method was extended to cardiac motion tracking, which helps maintain low complexity even at finer scales. On healthy volunteers, the tracking was found to be as accurate as the best candidates of a recent benchmark. Strain accuracy was evaluated on synthetic data, showing low bias and strain errors under 5% (excluding outliers) for longitudinal and circumferential strains, while the second and third quartiles of the radial strain errors lie in the (-5%, 5%) range. In clinical data, strain dispersion was shown to correlate with the extent of transmural fibrosis, and reduced deformation values were found inside infarcted segments. PMID:26363844

  5. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
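
    Stripped of PEST specifics, Tikhonov-constrained estimation of pilot-point parameters reduces to a penalized least-squares problem. The sketch below is a generic illustration in which beta plays the role that PEST's regularization control variables effectively tune; all names are hypothetical.

        import numpy as np

        def tikhonov_solve(J, r, R, beta):
            # minimize ||J p - r||^2 + beta * ||R p||^2, where J maps pilot-point
            # parameters p to simulated observations, r is the observed residual,
            # and R encodes the preferred condition (e.g. homogeneity among points)
            A = J.T @ J + beta * (R.T @ R)
            return np.linalg.solve(A, J.T @ r)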

  6. Interior Regularity Estimates in High Conductivity Homogenization and Application

    NASA Astrophysics Data System (ADS)

    Briane, Marc; Capdeboscq, Yves; Nguyen, Luc

    2013-01-01

    In this paper, uniform pointwise regularity estimates for the solutions of conductivity equations are obtained in a unit conductivity medium reinforced by an ε-periodic lattice of highly conducting thin rods. The estimates are derived only at a distance ε^(1+τ) (for some τ > 0) away from the fibres. This distance constraint is rather sharp since the gradients of the solutions are shown to be unbounded locally in L^p as soon as p > 2. One key ingredient is the derivation in dimension two of regularity estimates for the solutions of the equations deduced from a Fourier series expansion with respect to the fibres' direction, and weighted by the high-contrast conductivity. The dependence on powers of ε of these two-dimensional estimates is shown to be sharp. The initial motivation for this work comes from imaging, and enhanced resolution phenomena observed experimentally in the presence of micro-structures (Lerosey et al., Science 315:1120-1124, 2007). We use these regularity estimates to characterize the signature of low volume fraction heterogeneities in the fibre-reinforced medium, assuming that the heterogeneities stay at a distance ε^(1+τ) away from the fibres.

  7. Impact on asteroseismic analyses of regular gaps in Kepler data

    NASA Astrophysics Data System (ADS)

    García, R. A.; Mathur, S.; Pires, S.; Régulo, C.; Bellamy, B.; Pallé, P. L.; Ballot, J.; Barceló Forteza, S.; Beck, P. G.; Bedding, T. R.; Ceillier, T.; Roca Cortés, T.; Salabert, D.; Stello, D.

    2014-08-01

    Context. The NASA Kepler mission has observed more than 190 000 stars in the constellations of Cygnus and Lyra. Around 4 years of almost continuous ultra high-precision photometry have been obtained reaching a duty cycle higher than 90% for many of these stars. However, almost regular gaps due to nominal operations are present in the light curves on different time scales. Aims: In this paper we want to highlight the impact of those regular gaps in asteroseismic analyses, and we try to find a method that minimizes their effect on the frequency domain. Methods: To do so, we isolate the two main time scales of quasi regular gaps in the data. We then interpolate the gaps and compare the power density spectra of four different stars: two red giants at different stages of their evolution, a young F-type star, and a classical pulsator in the instability strip. Results: The spectra obtained after filling the gaps in the selected solar-like stars show a net reduction in the overall background level, as well as a change in the background parameters. The inferred convective properties could change as much as ~200% in the selected example, introducing a bias in the p-mode frequency of maximum power. When asteroseismic scaling relations are used, this bias can lead to a variation in the surface gravity of 0.05 dex. Finally, the oscillation spectrum in the classical pulsator is cleaner than the original one.
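
    The gap-filling step can be reproduced in miniature before any spectral comparison. The sketch below uses plain linear interpolation as a simplified stand-in for the inpainting technique actually applied by the authors; names are illustrative.

        import numpy as np

        def fill_gaps(time, flux, good):
            # good: boolean mask, True where data are valid; gaps are interpolated
            filled = flux.copy()
            filled[~good] = np.interp(time[~good], time[good], flux[good])
            return filled

        def power_density(flux, dt):
            # one-sided power density spectrum of an evenly sampled light curve
            freq = np.fft.rfftfreq(flux.size, dt)
            power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
            return freq, power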

  8. Regularity and predictability of human mobility in personal space.

    PubMed

    Austin, Daniel; Cross, Robin M; Hayes, Tamara; Kaye, Jeffrey

    2014-01-01

    Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out of home activity during travel or social interactions with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity) is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.

  9. Manifold regularized multitask feature learning for multimodality disease classification.

    PubMed

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2015-02-01

    Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. We also extend our method to the semisupervised setting, where only part of the data is labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. PMID:25277605
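
    The objective described above can be sketched schematically, assuming a squared loss: the group-sparsity (L2,1) term couples feature selection across modalities, and a per-modality graph Laplacian preserves each modality's data distribution. This proximal-gradient step is a simplified stand-in for the authors' solver; names and step sizes are assumptions.

        import numpy as np

        def l21_prox(W, t):
            # row-wise soft thresholding: zeroes out features jointly across tasks
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

        def mtfl_step(W, Xs, ys, Ls, lr=1e-4, lam1=0.1, lam2=0.1):
            # one step on sum_m ||X_m w_m - y_m||^2 + lam2 (X_m w_m)' L_m (X_m w_m),
            # followed by the proximal map of lam1 * ||W||_{2,1}
            G = np.zeros_like(W)
            for m, (X, y, L) in enumerate(zip(Xs, ys, Ls)):
                f = X @ W[:, m]
                G[:, m] = 2 * X.T @ (f - y) + 2 * lam2 * (X.T @ (L @ f))
            return l21_prox(W - lr * G, lr * lam1)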

  10. Identification and sorting of regular textures according to their similarity

    NASA Astrophysics Data System (ADS)

    Hernández Mesa, Pilar; Anastasiadis, Johannes; Puente León, Fernando

    2015-05-01

    Regardless of whether mosaics, material surfaces or skin surfaces are inspected, their texture plays an important role. Texture is a property that is hard to describe in words but easy to convey in pictures, and a huge amount of digital images containing visual descriptions of textures already exists. However, this information becomes useless if there are no appropriate methods to browse the data. In addition, depending on the given task, properties like scale, rotation or intensity invariance may be desired. In this paper we propose to analyze texture images according to their characteristic pattern. First, a classification approach is proposed to separate regular from non-regular textures. The second stage focuses on regular textures, suggesting a method to sort them according to their similarity. Different features are extracted from the texture in order to describe its scale, orientation, texel and the texel's relative position. Depending on the desired invariance of the visual characteristics (like the texture's scale or the texel's form invariance), the comparison of the features between images is weighted and combined to define the degree of similarity between them. Tuning the weighting parameters allows this search algorithm to be adapted easily to the requirements of the desired task. Not only can the total invariance of desired parameters be adjusted; the weighting of the parameters may also be modified to adapt to an application-specific type of similarity. This search method has been evaluated using different textures and similarity criteria, achieving very promising results.
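
    The weighted combination step lends itself to a small sketch: per-property distances are assumed precomputed and normalized, and task-specific weights control which invariances matter. All names and values are illustrative.

        def texture_similarity(distances, weights):
            # distances: per-property normalized distances in [0, 1], e.g.
            #   {'scale': 0.2, 'orientation': 0.7, 'texel': 0.1, 'layout': 0.3}
            # weights: task-specific importance; a zero weight makes the
            #   comparison invariant to that property
            total = sum(weights.values())
            d = sum(weights[k] * distances[k] for k in weights) / total
            return 1.0 - d  # 1 = identical, 0 = maximally dissimilar

        # emphasize texel form, ignore scale:
        # texture_similarity(d, {'scale': 0, 'orientation': 1, 'texel': 3, 'layout': 1})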

  11. Isotropic model for cluster growth on a regular lattice

    NASA Astrophysics Data System (ADS)

    Yates, Christian A.; Baker, Ruth E.

    2013-08-01

    There exists a plethora of mathematical models for cluster growth and/or aggregation on regular lattices. Almost all suffer from inherent anisotropy caused by the regular lattice upon which they are grown. We analyze the little-known model for stochastic cluster growth on a regular lattice first introduced by Ferreira Jr. and Alves [J. Stat. Mech.: Theory Exp. (2006) P11007], which produces circular clusters with no discernible anisotropy. We demonstrate that even in the noise-reduced limit the clusters remain circular. We adapt the model by introducing a specific rearrangement algorithm so that, rather than adding elements to the cluster from the outside (corresponding to apical growth), our model uses mitosis-like cell splitting events to increase the cluster size. We analyze the surface scaling properties of our model and compare it to the behavior of more traditional models. In “1+1” dimensions we discover and explore a new, nonmonotonic surface thickness scaling relationship which differs significantly from the Family-Vicsek scaling relationship. This suggests that, for models whose clusters do not grow through particle additions which are solely dependent on surface considerations, the traditional classification into “universality classes” may not be appropriate.

  12. Invariant regularization of anomaly-free chiral theories

    NASA Astrophysics Data System (ADS)

    Chang, Lay Nam; Soo, Chopin

    1997-02-01

    We present a generalization of the Frolov-Slavnov invariant regularization scheme for chiral fermion theories in curved spacetimes. The Lagrangian level regularization is explicitly invariant under all the local gauge symmetries of the theory, including local Lorentz invariance. The perturbative scheme works for arbitrary representations which satisfy the chiral gauge anomaly and the mixed Lorentz-gauge anomaly cancellation conditions. Anomalous theories on the other hand manifest themselves by having divergent fermion loops which remain unregularized by the scheme. Since the invariant scheme is promoted to include also local Lorentz invariance, spectator fields which do not couple to gravity cannot be, and are not, introduced. Furthermore, the scheme is truly chiral (Weyl) in that all fields, including the regulators, are left handed; and only the left-handed spin connection is needed. The scheme is, therefore, well suited for the study of the interaction of matter with all four known forces in a completely chiral fashion. In contrast with the vectorlike formulation, the degeneracy between the Adler-Bell-Jackiw current and the fermion number current in the bare action is preserved by the chiral regularization scheme.

  13. 1200 years of regular outbreaks in alpine insects

    PubMed Central

    Esper, Jan; Büntgen, Ulf; Frank, David C; Nievergelt, Daniel; Liebhold, Andrew

    2006-01-01

    The long-term history of Zeiraphera diniana Gn. (the larch budmoth, LBM) outbreaks was reconstructed from tree rings of host subalpine larch in the European Alps. This record was derived from 47 513 maximum latewood density measurements, and highlights the impact of contemporary climate change on ecological disturbance regimes. With over 1000 generations represented, this is the longest annually resolved record of herbivore population dynamics, and our analysis demonstrates that remarkably regular LBM fluctuations persisted over the past 1173 years with population peaks averaging every 9.3 years. These regular abundance oscillations recurred until 1981, with the absence of peak events during recent decades. Comparison with an annually resolved, millennium-long temperature reconstruction representative for the European Alps (r=0.72, correlation with instrumental data) demonstrates that regular insect population cycles continued despite major climatic changes related to warming during medieval times and cooling during the Little Ice Age. The late twentieth century absence of LBM outbreaks, however, corresponds to a period of regional warmth that is exceptional with respect to the last 1000+ years, suggesting vulnerability of an otherwise stable ecological system in a warming environment. PMID:17254991

  14. Path integral regularization of pure Yang-Mills theory

    SciTech Connect

    Jacquot, J. L.

    2009-07-15

    In enlarging the field content of pure Yang-Mills theory to a cutoff dependent matrix valued complex scalar field, we construct a vectorial operator, which is by definition invariant with respect to the gauge transformation of the Yang-Mills field and with respect to a Stueckelberg type gauge transformation of the scalar field. This invariant operator converges to the original Yang-Mills field as the cutoff goes to infinity. With the help of cutoff functions, we construct with this invariant a regularized action for the pure Yang-Mills theory. In order to be able to define both the gauge and scalar fields kinetic terms, other invariant terms are added to the action. Since the scalar fields flat measure is invariant under the Stueckelberg type gauge transformation, we obtain a regularized gauge-invariant path integral for pure Yang-Mills theory that is mathematically well defined. Moreover, the regularized Ward-Takahashi identities describing the dynamics of the gauge fields are exactly the same as the formal Ward-Takahashi identities of the unregularized theory.

  15. Manifold regularized non-negative matrix factorization with label information

    NASA Astrophysics Data System (ADS)

    Li, Huirong; Zhang, Jiangshe; Wang, Changpeng; Liu, Junmin

    2016-03-01

    Non-negative matrix factorization (NMF) as a popular technique for finding parts-based, linear representations of non-negative data has been successfully applied in a wide range of applications, such as feature learning, dictionary learning, and dimensionality reduction. However, both the local manifold regularization of data and the discriminative information of the available label have not been taken into account together in NMF. We propose a new semisupervised matrix decomposition method, called manifold regularized non-negative matrix factorization (MRNMF) with label information, which incorporates the manifold regularization and the label information into the NMF to improve the performance of NMF in clustering tasks. We encode the local geometrical structure of the data space by constructing a nearest neighbor graph and enhance the discriminative ability of different classes by effectively using the label information. Experimental comparisons with the state-of-the-art methods on the COIL20, PIE, Extended Yale B, and MNIST databases demonstrate the effectiveness of MRNMF.
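
    For orientation, the manifold part of such a factorization can be sketched with the standard multiplicative updates for graph-regularized NMF; in MRNMF the label information would additionally shape the neighbor graph A, e.g. by connecting same-label samples. This generic sketch is not the authors' exact algorithm.

        import numpy as np

        def graph_regularized_nmf(X, A, k, lam=1.0, iters=200, eps=1e-9):
            # X: (features x samples), nonnegative; A: sample affinity graph
            # factorize X ~= U @ V.T while pulling connected samples together in V
            D = np.diag(A.sum(axis=1))
            rng = np.random.default_rng(0)
            U = rng.random((X.shape[0], k))
            V = rng.random((X.shape[1], k))
            for _ in range(iters):
                U *= (X @ V) / (U @ (V.T @ V) + eps)
                V *= (X.T @ U + lam * (A @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
            return U, V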

  16. Matter conditions for regular black holes in f (T ) gravity

    NASA Astrophysics Data System (ADS)

    Aftergood, Joshua; DeBenedictis, Andrew

    2014-12-01

    We study the conditions imposed on matter to produce a regular (nonsingular) interior of a class of spherically symmetric black holes in the f(T) extension of teleparallel gravity. The class of black holes studied (T-spheres) is necessarily singular in general relativity. We derive a tetrad which is compatible with the black hole interior and utilize this tetrad in the gravitational equations of motion to study the black hole interior. It is shown that in the case where the gravitational Lagrangian is expandable in a power series f(T) = T + Σ_{n≠1} b_n T^n, black holes can be nonsingular while respecting certain energy conditions in the matter fields. Thus, the black hole singularity may be removed, and the gravitational equations of motion can remain valid throughout the manifold. This is true as long as n is positive but is not true in the negative sector of the theory. Hence, gravitational f(T) Lagrangians which are Taylor expandable in powers of T may yield regular black holes of this type. Although it is found that these black holes can be rendered nonsingular in f(T) theory, we conjecture that a mild singularity theorem holds, in that the dominant energy condition is violated in an arbitrarily small neighborhood of the general relativity singular point if the corresponding f(T) black hole is regular. The analytic techniques here can also be applied to gravitational Lagrangians which are not Laurent or Taylor expandable.

  17. Regular biorthogonal pairs and pseudo-bosonic operators

    NASA Astrophysics Data System (ADS)

    Inoue, H.; Takakura, M.

    2016-08-01

    The first purpose of this paper is to show a method of constructing a regular biorthogonal pair based on the commutation rule ab − ba = I for a pair of operators a and b acting on a Hilbert space H with inner product (·|·). Here, sequences {ϕ_n} and {ψ_n} in a Hilbert space H are biorthogonal if (ϕ_n|ψ_m) = δ_nm, n, m = 0, 1, …, and they are regular if both D_ϕ ≡ Span{ϕ_n} and D_ψ ≡ Span{ψ_n} are dense in H. Indeed, the assumptions needed to construct the regular biorthogonal pair coincide with the definition of pseudo-bosons as originally given in F. Bagarello ["Pseudobosons, Riesz bases, and coherent states," J. Math. Phys. 51, 023531 (2010)]. Furthermore, we study the connections between the pseudo-bosonic operators a, b, a†, b† and the pseudo-bosonic operators defined by a regular biorthogonal pair ({ϕ_n}, {ψ_n}) and an orthonormal basis e of H in H. Inoue ["General theory of regular biorthogonal pairs and its physical applications," e-print arXiv:math-ph/1604.01967]. The second purpose is to define and study the notion of D-pseudo-bosons in F. Bagarello ["More mathematics for pseudo-bosons," J. Math. Phys. 54, 063512 (2013)] and F. Bagarello ["From self-adjoint to non self-adjoint harmonic oscillators: Physical consequences and mathematical pitfalls," Phys. Rev. A 88, 032120 (2013)] and to give a stepwise method of constructing D-pseudo-bosons. It is then shown that for any orthonormal basis e = {e_n} in H and any operators T and T⁻¹ in L†(D), one may construct operators A and B satisfying the definition of D-pseudo-bosons, where D is a dense subspace of a Hilbert space H and L†(D) is the set of all linear operators T from D to D such that T*D ⊂ D, where T* is the adjoint of T. Finally, we give some physical examples of D-pseudo-bosons based on standard bosons, using the construction method stated above.

  18. Auditory feedback in error-based learning of motor regularity.

    PubMed

    van Vugt, Floris T; Tillmann, Barbara

    2015-05-01

    Music and speech are skills that require high temporal precision of motor output. A key question is how humans achieve this timing precision given the poor temporal resolution of somatosensory feedback, which is classically considered to drive motor learning. We hypothesise that auditory feedback critically contributes to learning timing and that, similarly to visuo-spatial learning models, learning proceeds by correcting a proportion of perceived timing errors. Thirty-six participants learned to tap a sequence regularly in time. For participants in the synchronous-sound group, a tone was presented simultaneously with every keystroke. For the jittered-sound group, the tone was presented after a random delay of 10-190 ms following the keystroke, thus degrading the temporal information that the sound provided about the movement. For the mute group, no keystroke-triggered sound was presented. In line with the model predictions, participants in the synchronous-sound group were able to improve tapping regularity, whereas the jittered-sound and mute groups were not. The improved tapping regularity of the synchronous-sound group also transferred to a novel sequence and was maintained when sound was subsequently removed. The present findings provide evidence that humans engage in auditory-feedback error-based learning to improve movement quality (here, reduced variability in sequence tapping). We thus elucidate the mechanism by which high temporal precision of movement can be achieved through sound in a way that may not be possible with less temporally precise somatosensory modalities. Furthermore, the finding that sound-supported learning generalises to novel sequences suggests potential rehabilitation applications.
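
    The error-correction model referred to above can be simulated in a few lines: a proportion alpha of each perceived timing error is corrected, with alpha > 0 only when informative (synchronous) auditory feedback is available. The random-walk drift term and all parameter values are illustrative assumptions, not the authors' model specification.

        import numpy as np

        def simulate_tapping(alpha, n_taps=300, target=0.5, motor_sd=0.01,
                             drift_sd=0.01, seed=0):
            # internal interval drifts as a random walk; feedback corrects it
            rng = np.random.default_rng(seed)
            internal = target + 0.05  # initial tempo misestimate
            produced = np.empty(n_taps)
            for i in range(n_taps):
                produced[i] = internal + rng.normal(0.0, motor_sd)
                error = produced[i] - target  # perceived via auditory feedback
                internal += rng.normal(0.0, drift_sd) - alpha * error
            return produced.std()  # lower = more regular tapping

        # synchronous sound ~ alpha > 0; jittered or absent sound ~ alpha = 0
        # print(simulate_tapping(0.5), simulate_tapping(0.0))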

  19. Quantification of fetal heart rate regularity using symbolic dynamics

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.

    2007-03-01

    Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimester of pregnancy. In each fetal RR interval time series, short term beat-to-beat heart rate changes were coded in 8-bit binary sequences. Redundancies of the 2^8 different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary value. There was a specific, nonuniform distribution of the sequences in the pattern sets and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations which may be associated with directed fetal activity or movement or baroreflex activity. Application of the pattern classes revealed that those sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns. Using symbolic dynamics to
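
    A compact sketch of the quantification step: beat-to-beat heart rate changes are coded as binary symbols and the approximate entropy (ApEn) of the resulting patterns is computed. The implementation follows the standard ApEn definition with exact template matching (r = 0); the authors' parameter choices may differ.

        import numpy as np

        def symbolize(rr):
            # code beat-to-beat changes of an RR interval series as 0/1 symbols
            return (np.diff(rr) > 0).astype(int)

        def approximate_entropy(seq, m=2, r=0):
            seq = np.asarray(seq)
            def phi(m):
                n = len(seq) - m + 1
                templates = np.array([seq[i:i + m] for i in range(n)])
                counts = np.array([(np.abs(templates - t).max(axis=1) <= r).sum()
                                   for t in templates])
                return np.log(counts / n).mean()
            return phi(m) - phi(m + 1)  # lower ApEn = more regular sequence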

  20. Applying molecular immunohaematology to regularly transfused thalassaemic patients in Thailand

    PubMed Central

    Rujirojindakul, Pairaya; Flegel, Willy A.

    2014-01-01

    Background Red blood cell transfusion is the principal therapy in patients with severe thalassaemias and haemoglobinopathies, which are prevalent in Thailand. Serological red blood cell typing is confounded by chronic transfusion, because of circulating donor red blood cells. We evaluated the concordance of serological phenotypes between a routine and a reference laboratory and with red cell genotyping. Materials and methods Ten consecutive Thai patients with β-thalassemia major who received regular transfusions were enrolled in Thailand. Phenotypes were tested serologically at Songklanagarind Hospital and at the National Institutes of Health. Red blood cell genotyping was performed with commercially available kits and a platform. Results In only three patients was the red cell genotyping concordant with the serological phenotypes for five antithetical antigen pairs in four blood group systems at the two institutions. At the National Institutes of Health, 32 of the 100 serological tests yielded invalid or discrepant results. The positive predictive value of serology did not reach 1 for any blood group system at either of the two institutions in this set of ten patients. Discussion Within this small study, numerous discrepancies were observed between serological phenotypes at the two institutes; red cell genotyping enabled determination of the blood group when serology failed due to transfused red blood cells. We question the utility of serological tests in regularly transfused paediatric patients and propose relying solely on red cell genotyping, which requires training for laboratory personnel and physicians. Red cell genotyping outperformed red cell serology by an order of magnitude in regularly transfused patients. PMID:24120606

  1. Factors associated with regular dental visits among hemodialysis patients

    PubMed Central

    Yoshioka, Masami; Shirayama, Yasuhiko; Imoto, Issei; Hinode, Daisuke; Yanagisawa, Shizuko; Takeuchi, Yuko; Bando, Takashi; Yokota, Narushi

    2016-01-01

    AIM To investigate awareness and attitudes about preventive dental visits among dialysis patients; to clarify the barriers to visiting the dentist. METHODS Subjects included 141 dentate outpatients receiving hemodialysis treatment at two facilities, one with a dental department and the other without a dental department. We used a structured questionnaire to interview participants about their awareness of oral health management issues for dialysis patients, perceived oral symptoms and attitudes about dental visits. Bivariate analysis using the χ2 test was conducted to determine associations between study variables and regular dental check-ups. Binomial logistic regression analysis was used to determine factors associated with regular dental check-ups. RESULTS There were no significant differences in patient demographics between the two participating facilities, including attitudes about dental visits. Therefore, we included all patients in the following analyses. Few patients (4.3%) had been referred to a dentist by a medical doctor or nurse. Although 80.9% of subjects had a primary dentist, only 34.0% of subjects received regular dental check-ups. The most common reasons cited for not seeking dental care were that visits are burdensome and a lack of perceived need. Patients with gum swelling or bleeding were much more likely to be in the group of those not receiving routine dental check-ups (χ2 test, P < 0.01). Logistic regression analysis demonstrated that receiving dental check-ups was associated with awareness that oral health management is more important for dialysis patients than for others and with having a primary dentist (P < 0.05). CONCLUSION Dialysis patients should be educated about the importance of preventive dental care. Medical providers are expected to participate in promoting dental visits among dialysis patients. PMID:27648409

  2. The statistical difference between bending arcs and regular polar arcs

    NASA Astrophysics Data System (ADS)

    Kullen, A.; Fear, R. C.; Milan, S. E.; Carter, J. A.; Karlsson, T.

    2015-12-01

    In this work, the Polar UVI data set by Kullen et al. (2002) of 74 polar arcs is reinvestigated, focusing on bending arcs. Bending arcs are typically faint and form (depending on interplanetary magnetic field (IMF) By direction) on the dawnside or duskside oval with the tip of the arc splitting off the dayside oval. The tip subsequently moves into the polar cap in the antisunward direction, while the arc's nightside end remains attached to the oval, eventually becoming hook-shaped. Our investigation shows that bending arcs appear on the opposite oval side from and farther sunward than most regular polar arcs. They form during By-dominated IMF conditions: typically, the IMF clock angle increases from 60 to 90° about 20 min before the arc forms. Antisunward plasma flows from the oval into the polar cap just poleward of bending arcs are seen in Super Dual Auroral Radar Network data, indicating dayside reconnection. For regular polar arcs, recently reported characteristics are confirmed in contrast to bending arcs. This includes plasma flows along the nightside oval that originate close to the initial arc location and a significant delay in the correlation between IMF By and initial arc location. In our data set, the highest correlations are found with IMF By appearing at least 1-2 h before arc formation. In summary, bending arcs are distinctly different from regular arcs and cannot be explained by existing polar arc models. Instead, these results are consistent with the formation mechanism described in Carter et al. (2015), suggesting that bending arcs are caused by dayside reconnection.

  4. Lipschitz Regularity for Elliptic Equations with Random Coefficients

    NASA Astrophysics Data System (ADS)

    Armstrong, Scott N.; Mourrat, Jean-Christophe

    2016-01-01

    We develop a higher regularity theory for general quasilinear elliptic equations and systems in divergence form with random coefficients. The main result is a large-scale L^∞-type estimate for the gradient of a solution. The estimate is proved with optimal stochastic integrability under a one-parameter family of mixing assumptions, allowing for very weak mixing with non-integrable correlations to very strong mixing (for example finite range of dependence). We also prove a quenched L^2 estimate for the error in homogenization of Dirichlet problems. The approach is based on subadditive arguments which rely on a variational formulation of general quasilinear divergence-form equations.

  5. Infants use temporal regularities to chunk objects in memory.

    PubMed

    Kibbe, Melissa M; Feigenson, Lisa

    2016-01-01

    Infants, like adults, can maintain only a few items in working memory, but can overcome this limit by creating more efficient representations, or "chunks." Previous research shows that infants can form chunks using shared features or spatial proximity between objects. Here we asked whether infants also can create chunked representations using regularities that unfold over time. Thirteen-month-old infants were first familiarized with four objects of different shapes and colors, presented in successive pairs. For some infants, the identities of objects in each pair varied randomly across familiarization (Experiment 1). For others, the objects within a pair always co-occurred, either in consistent relative spatial positions (Experiment 2a) or varying spatial positions (Experiment 2b). Following familiarization, infants saw all four objects hidden behind a screen and then saw the screen lifted to reveal either four objects or only three. Infants in Experiment 1, who had been familiarized with random object pairings, failed to look longer at the unexpected 3-object outcome; they showed the same inability to concurrently represent four objects as in other studies of infant working memory. In contrast, infants in Experiments 2a and 2b, who had been familiarized with regularly co-occurring pairs, looked longer at the unexpected outcome. These infants apparently used the co-occurrence between individual objects during familiarization to form chunked representations that were later deployed to track the objects as they were hidden at test. In Experiment 3, we confirmed that the familiarization affected infants' ability to remember the occluded objects rather than merely establishing longer-term memory for object pairs. Following familiarization to consistent pairs, infants who were not shown a hiding event (but merely saw the same test outcomes as in Experiments 2a and 2b) showed no preference for arrays of three versus four objects. Finally, in Experiments 4 and 5, we asked

  6. Compensatory Increase in Ovarian Aromatase in Older Regularly Cycling Women

    PubMed Central

    Shaw, N. D.; Srouji, S. S.; Welt, C. K.; Cox, K. H.; Fox, J. H.; Adams, J. A.; Sluss, P. M.

    2015-01-01

    Context: Serum estradiol (E2) levels are preserved in older reproductive-aged women with regular menstrual cycles despite declining ovarian function. Objective: The objective of the study was to determine whether increased granulosa cell aromatase expression and activity account for preservation of E2 levels in older, regularly cycling women. Design: The protocol included daily blood sampling and dominant follicle aspirations at an academic medical center during a natural menstrual cycle. Subjects: Healthy, regularly cycling older (36–45 y; n = 13) and younger (22–34 y; n = 14) women participated in the study. Main Outcome Measures: Hormone levels were measured in peripheral blood and follicular fluid aspirates and granulosa cell CYP19A1 (aromatase) and FSH-R mRNA expression were determined. Results: Older women had higher FSH levels than younger women during the early follicular phase with similar E2 but lower inhibin B and antimullerian hormone levels. Late follicular phase serum E2 did not differ between the two groups. Follicular fluid E2 [older (O) = 960.0 [interquartile range (IQR) 765.0–1419.0]; younger (Y) = 994.5 [647.3–1426.5] ng/mL, P = 1.0], estrone (O = 39.6 [29.5–54.1]; Y = 28.8 [22.5–42.1] ng/mL, P = 0.3), and the E2 to testosterone (T) ratio (O = 109.0 ± 41.9; Y = 83.0 ± 18.6, P = .50) were preserved in older women. Granulosa cell CYP19A1 expression was increased 3-fold in older compared with younger women (P < .001), with no difference in FSH-R expression. Conclusions: Ovarian aromatase expression increases with age in regularly cycling women. Thus, up-regulation of aromatase activity appears to compensate for the known age-related decrease in granulosa cell number in the dominant follicle to maintain ovarian estrogen production in older premenopausal women. PMID:26126208

  7. Ideality contours and thermodynamic regularities in supercritical molecular fluids

    NASA Astrophysics Data System (ADS)

    Desgranges, Caroline; Margo, Abigail; Delhommelle, Jerome

    2016-08-01

    Using Expanded Wang-Landau simulations, we calculate the ideality contours for 3 molecular fluids (SF6, CO2 and H2O). We analyze how the increase in polarity, and thus, in the strength of the intermolecular interactions, impacts the contours and thermodynamic regularities. This effect results in the increase in the Boyle and H parameters, that underlie the Zeno line and the curve of ideal enthalpy. Furthermore, a detailed analysis reveals that dipole-dipole interactions lead to much larger enthalpic contributions to the Gibbs free energy. This accounts for the much higher temperatures and pressures that are necessary for supercritical H2O to achieve ideal-like thermodynamic properties.

  8. Regular Wave Propagation Out of Noise in Chemical Active Media

    SciTech Connect

    Alonso, S.; Sendina-Nadal, I.; Perez-Munuzuri, V.; Sancho, J. M.; Sagues, F.

    2001-08-13

    A pacemaker, regularly emitting chemical waves, is created out of noise when an excitable photosensitive Belousov-Zhabotinsky medium, strictly unable to autonomously initiate autowaves, is forced with a spatiotemporal patterned random illumination. These experimental observations are also reproduced numerically by using a set of reaction-diffusion equations for an activator-inhibitor model, and further analytically interpreted in terms of genuine coupling effects arising from parametric fluctuations. Within the same framework we also address situations of noise-sustained propagation in subexcitable media.

  9. Regular and irregular geodesics on spherical harmonic surfaces

    NASA Astrophysics Data System (ADS)

    Waters, Thomas J.

    2012-03-01

    The behavior of geodesic curves on even seemingly simple surfaces can be surprisingly complex. In this paper we use the Hamiltonian formulation of the geodesic equations to analyze their integrability properties. In particular, we examine the behavior of geodesics on surfaces defined by the spherical harmonics. Using the Morales-Ramis theorem and Kovacic algorithm we are able to prove that the geodesic equations on all surfaces defined by the sectoral harmonics are not integrable, and we use Poincaré sections to demonstrate the breakdown of regular motion.

  10. Regular arrays of Al nanoparticles for plasmonic applications

    SciTech Connect

    Schade, Martin; Bohley, Christian; Sardana, Neha; Schilling, Jörg; Fuhrmann, Bodo; Schlenker, Sven; Leipner, Hartmut S.

    2014-02-28

    Optical properties of aluminium nanoparticles deposited on glass substrates are investigated. Laser interference lithography allows quick deposition of regular, highly periodic arrays of nanostructures with different sizes and spacings in order to investigate the shift of the surface plasmon resonance for, e.g., photovoltaic, plasmonic or photonic applications. Varying the diameter of cylindrical Al nanoparticles produces a nearly linear shift of the surface plasmon resonance between 400 nm and 950 nm that is independent of the polarization vector of the incident light. Furthermore, particles with square or elliptical base areas are presented, exhibiting more complex, polarization-dependent transmission spectra.

  11. Adaptive regularized scheme for remote sensing image fusion

    NASA Astrophysics Data System (ADS)

    Tang, Sizhang; Shen, Chaomin; Zhang, Guixu

    2016-06-01

    We propose an adaptive regularized algorithm for remote sensing image fusion based on variational methods. In the algorithm, we integrate the inputs using a "grey world" assumption to achieve visual uniformity. We propose a fusion operator that can automatically select the total variation (TV)-L1 term for edges and L2-terms for non-edges. To implement our algorithm, we use the steepest descent method to solve the corresponding Euler-Lagrange equation. Experimental results show that the proposed algorithm achieves remarkable results.
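
    The edge-adaptive switch between a TV-type term and an L2 term can be sketched as one explicit descent step on the corresponding gradient flow: the TV flow normalizes the image gradient on detected edges, while plain (quadratic) diffusion acts elsewhere. Thresholds, step sizes and names are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def fusion_step(u, f, lam=1.0, tau=0.05, edge_thresh=0.05, eps=1e-8):
            # one step on lam*(u - f)^2 + phi(grad u): TV-like phi on edges, L2 off edges
            gx = np.gradient(u, axis=1)
            gy = np.gradient(u, axis=0)
            mag = np.sqrt(gx**2 + gy**2 + eps)
            edges = mag > edge_thresh
            wx = np.where(edges, gx / mag, gx)  # TV flow divides by |grad u|
            wy = np.where(edges, gy / mag, gy)
            div = np.gradient(wx, axis=1) + np.gradient(wy, axis=0)
            return u + tau * (div - 2.0 * lam * (u - f))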

  12. On the low regularity of the Benney-Lin equation

    NASA Astrophysics Data System (ADS)

    Chen, Wengu; Li, Junfeng

    2008-03-01

    We consider the low regularity of the Benney-Lin equation u_t + uu_x + u_{xxx} + β(u_{xx} + u_{xxxx}) + ηu_{xxxxx} = 0. We establish global well-posedness for the initial value problem of the Benney-Lin equation in the Sobolev spaces H^s for 0 ≥ s > -2, improving the well-posedness result of Biagioni and Linares [H.A. Biagioni, F. Linares, On the Benney-Lin and Kawahara equations, J. Math. Anal. Appl. 211 (1997) 131-152]. For s < -2 we also prove some ill-posedness issues.

  13. 1. PLAN OF MOXHAM, JOHNSTOWN, PENNA. ALL REGULAR LOTS 40 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. PLAN OF MOXHAM, JOHNSTOWN, PENNA. ALL REGULAR LOTS 40 FT BY 120 FT. TRACED FROM DRAWING 10742 (dated February 1, 1892). THE JOHNSON COMPANY, SCALE 1 INCH - 160 FT, SEPT. 19TH 1898. DRAWING NUMBER 29781. Original plan for the Town of Moxham drafted in 1887-88, company archives contain several revised blueprints of the original plan. This revision reflects the subdivision of the Von Lunch Grove into residential lots, but still indicates the 'Moxham Block' on which the original Moxham Estate was built in 1888-89. (Photograph of drawing held at the Johnstown Corporation General Office, Johnstown, PA) - Borough of Moxham, Johnstown, Cambria County, PA

  14. A mechanical counterexample to KAM theory with low regularity

    NASA Astrophysics Data System (ADS)

    Marò, Stefano

    2014-08-01

    We give a mechanical example showing that some regularity is necessary in KAM theory. We consider the model given by the vertical bouncing motion of a ball on a periodically moving plate. Denoting by f the motion of the plate, some variants of Moser's invariant curve theorem apply if ḟ is small in the C^5 norm, and then every motion has bounded velocity. This is not possible if the function f is only C^1. Indeed, we construct a function f ∈ C^1, with arbitrarily small derivative in the C^0 norm, for which a motion with unbounded velocity exists.

  15. The Behavior of Regular Satellites During the Planetary Migration

    NASA Astrophysics Data System (ADS)

    Nogueira, Erica Cristina; Gomes, R. S.; Brasser, R.

    2013-05-01

    The behavior of the regular satellites of the giant planets during the instability phase of the Nice model needs to be better understood. In order to explain this behavior, we used numerical simulations to investigate the evolution of the regular satellite systems of the ice giants when these two planets experienced encounters with the gas giants. For the initial conditions we placed an ice planet in between Jupiter and Saturn, according to the evolution of Nice model simulations in a ‘jumping Jupiter’ scenario (Brasser et al. 2009). We used the MERCURY integrator (Chambers 1999) and cloned simulations by slightly modifying the Hybrid integrator changeover parameter. We obtained 101 successful runs which kept all planets, of which 24 were jumping Jupiter cases. Subsequently we performed additional numerical integrations in which the ice giant that encountered a gas giant was started on the same orbit but with its regular satellites included. This was done as follows: for each of the 101 basic runs, we saved the orbital elements of all objects at all close-encounter events. We then performed a backward integration to start the system 100 years before the encounter and re-enacted the forward integration with the regular satellites around the ice giant. These integrations ran for 1000 years. The final orbital elements of the satellites with respect to the ice planet were used to restart the integration for the next planetary encounter (if any). If we assume that Uranus is the ice planet that had encounters with a gas giant, we considered the satellites Miranda, Ariel, Umbriel, Titania and Oberon with their present orbits around the planet. For Neptune we introduced Triton on an orbit with a semi-major axis 15% larger than its present one to account for tidal decay from the LHB to the present time. We also assume that Triton was captured through binary disruption (Agnor and Hamilton 2006, Nogueira et al. 2011) and

  16. From hyperbolic regularization to exact hydrodynamics for linearized Grad's equations.

    PubMed

    Colangeli, Matteo; Karlin, Iliya V; Kröger, Martin

    2007-05-01

    Inspired by a recent hyperbolic regularization of Burnett's hydrodynamic equations [A. Bobylev, J. Stat. Phys. 124, 371 (2006)], we introduce a method to derive hyperbolic equations of linear hydrodynamics to any desired accuracy in the Knudsen number. The approach is based on a dynamic invariance principle, which derives exact constitutive relations for the stress tensor and heat flux, and a transformation which renders the exact equations of hydrodynamics hyperbolic and stable. The method is described in detail for a simple kinetic model -- the 13-moment Grad system.

  17. Developmental dyslexia in a regular orthography: a single case study.

    PubMed

    Moll, Kristina; Hutzler, Florian; Wimmer, Heinz

    2005-12-01

    This study of an adult case examined in detail with eye movement measures the reading speed problem which is characteristic for developmental dyslexia in regular orthographies. A dramatic length effect was found for low frequency words and for pseudowords, but not for high frequency words. However, even for high frequency words it was found that reading times were substantially prolonged although number of fixations did not differ. A neurocognitive assessment revealed no visual deficits (parallel processing, precedence detection, coherent motion detection) but speed impairments for certain verbal and phonological processes. We propose that the reading difficulties are phonological in nature, but these difficulties become manifest as inefficiency and not as inability. PMID:16393757

  18. Trigonometric Padé approximants for functions with regularly decreasing Fourier coefficients

    SciTech Connect

    Labych, Yuliya A; Starovoitov, Alexander P

    2009-08-31

    Sufficient conditions describing the regular decrease of the coefficients of a Fourier series f(x) = a_0/2 + Σ a_k cos kx are found which ensure that the trigonometric Padé approximants π^t_{n,m}(x; f) converge to the function f in the uniform norm at a rate which coincides asymptotically with the highest possible one. The results obtained are applied to problems dealing with finding sharp constants for rational approximations. Bibliography: 31 titles.

  19. Regularization approach for tomosynthesis X-ray inspection

    SciTech Connect

    Tigkos, Konstantinos; Hassler, Ulf; Holub, Wolfgang; Woerlein, Norbert; Rehak, Markus

    2014-02-18

    X-ray inspection is intended to be used as an escalation technique for inspection of carbon fiber reinforced plastics (CFRP) in aerospace applications, especially in case of unclear indications from ultrasonic or other NDT modalities. Due to their large dimensions, most aerospace components cannot be scanned by conventional computed tomography. In such cases, X-ray Laminography may be applied, allowing a pseudo 3D slice-by-slice reconstruction of the sample with Tomosynthesis. However, due to the limited angle acquisition geometry, reconstruction artifacts arise, especially at surfaces parallel to the imaging plane. To regularize the Tomosynthesis approach, we propose an additional prescan of the object to detect outer sample surfaces. We recommend the use of contrasted markers which are temporarily attached to the sample surfaces. The depth position of the markers is then derived from that prescan. As long as the sample surface remains simple, few markers are required to fit the respective object surfaces. The knowledge about this surface may then be used to regularize the final Tomosynthesis reconstruction, performed with markerless projections. Eventually, it can also serve as prior information for an ART reconstruction or to register a CAD model of the sample. The presented work is carried out within the European FP7 project QUICOM. We demonstrate the proposed approach within a simulation study applying an acquisition geometry suited for CFRP part inspection. A practical verification of the approach is planned later in the project.

  20. Statistical regularities in the rank-citation profile of scientists.

    PubMed

    Petersen, Alexander M; Stanley, H Eugene; Succi, Sauro

    2011-01-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c_i(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c_i(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c_i(r) profiles, our results demonstrate the utility of the β_i scaling parameter in conjunction with h_i for quantifying individual publication impact. We show that the total number of citations C_i tallied from a scientist's N_i papers scales as [Formula: see text]. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.

  1. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
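
    The contrast drawn above between Gaussian smoothing and explicit B-spline regularization can be illustrated on a 2D displacement field. The smoothing-spline call below is a generic stand-in for the DMFFD machinery in ANTs; all parameters are assumed.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.interpolate import RectBivariateSpline

        def smooth_gaussian(field, sigma=2.0):
            # Demons-style regularization: Gaussian-smooth each component
            return np.stack([gaussian_filter(c, sigma) for c in field])

        def smooth_bspline(field, s=50.0):
            # B-spline-flavored alternative: approximate each component by a
            # smoothing spline (s trades data fidelity against smoothness)
            ny, nx = field.shape[1:]
            y, x = np.arange(ny), np.arange(nx)
            return np.stack([RectBivariateSpline(y, x, c, s=s)(y, x) for c in field])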

  2. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-09-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when the two local regularities L-MSDA and L-MSDSM are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  3. Regularizing the divergent structure of light-front currents

    SciTech Connect

    Bakker, Bernard L. G.; Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2001-04-01

    The divergences appearing in the (3+1)-dimensional fermion-loop calculations are often regulated by smearing the vertices in a covariant manner. Performing a parallel light-front calculation, we corroborate the similarity between the vertex-smearing technique and the Pauli-Villars regularization. In the light-front calculation of the electromagnetic meson current, we find that the persistent end-point singularity that appears in the case of point vertices is removed even if the smeared vertex is taken to the limit of the point vertex. Recapitulating the current conservation, we substantiate the finiteness of both valence and nonvalence contributions in all components of the current with the regularized bound-state vertex. However, we stress that each contribution, valence or nonvalence, depends on the reference frame even though the sum is always frame independent. The numerical taxonomy of each contribution including the instantaneous contribution and the zero-mode contribution is presented in the π, K, and D-meson form factors.

  4. Global Optimization Methods for Gravitational Lens Systems with Regularized Sources

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Fiege, Jason D.

    2012-11-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
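
    The GCV criterion used here can be written down compactly for a linear Tikhonov problem. The following sketch, for a generic operator A (not the lensing code), evaluates GCV(λ) via the SVD; the effective number of source degrees of freedom is the sum of the filter factors.

      # Hedged sketch: choosing a Tikhonov regularization weight by minimizing
      # the generalized cross validation (GCV) function via the SVD of a
      # generic forward operator A (linear inverse problem, toy data).
      import numpy as np

      def gcv(A, b, lambdas):
          m, _ = A.shape
          U, s, _ = np.linalg.svd(A, full_matrices=False)
          beta = U.T @ b                       # data in the left singular basis
          scores = []
          for lam in lambdas:
              f = s**2 / (s**2 + lam**2)       # Tikhonov filter factors
              resid2 = np.sum(((1.0 - f) * beta) ** 2) + (b @ b - beta @ beta)
              dof = m - np.sum(f)              # residual degrees of freedom
              scores.append(resid2 / dof**2)
          return np.array(scores)

      rng = np.random.default_rng(0)
      A = rng.standard_normal((100, 40))
      b = A @ rng.standard_normal(40) + 0.1 * rng.standard_normal(100)
      lams = np.logspace(-4, 2, 50)
      best = lams[np.argmin(gcv(A, b, lams))]
      print(f"GCV-optimal lambda: {best:.3g}")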

  5. Inversion of SOHO/EPHIN data using regularization techniques

    NASA Astrophysics Data System (ADS)

    Wimmer-Schweingruber, R. F.; Böhm, E.; Kharytonov, A.; Müller-Mellin, R.; Gomez-Herrero, R.; Heber, B.

    2006-12-01

    We analyze data from the Solar and Heliospheric Observatory (SOHO) instrument EPHIN (electron, proton, helium instrument) by full deconvolution of the measured data with the instrument response function. We show how regularization methods can be applied to energetic particle measurements to derive the original particle spectrum unambiguously, free of assumptions about its functional behaviour. This inversion technique still requires knowledge of the instrument response function; however, it is an improvement upon normal least-squares or maximum-likelihood fitting procedures because it does not require any a priori knowledge of the underlying particle spectra. Given the instrument response function in matrix form (here derived using Monte Carlo techniques), the original Fredholm integral equations reduce to a discrete system of linear algebraic equations that can be solved by ordinary regularization methods such as singular value decomposition (SVD) or the Tikhonov method. This procedure alone may lead to unphysical negative results, requiring the further constraint of non-negative count rates. We apply the SVD and Tikhonov methods, with and without constraints, to measured data from SOHO/EPHIN. The derived results agree well with those of other methods that rely on a priori knowledge of the spectral shape of the particle distribution function, demonstrating the power of the method for more general cases.
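
    In the discrete setting described above, the Tikhonov step and the non-negativity constraint can be prototyped in a few lines. The sketch below uses an invented response matrix R and synthetic counts; the actual EPHIN response comes from Monte Carlo simulation.

      # Sketch of the two inversion steps for a generic discrete response
      # matrix R (counts = R @ spectrum): Tikhonov via SVD filter factors,
      # then a non-negativity-constrained solve. Illustrative only.
      import numpy as np
      from scipy.optimize import nnls

      def tikhonov_svd(R, counts, lam):
          U, s, Vt = np.linalg.svd(R, full_matrices=False)
          f = s / (s**2 + lam**2)              # damped inverse singular values
          return Vt.T @ (f * (U.T @ counts))

      def tikhonov_nonneg(R, counts, lam):
          # Stack the regularization term and enforce spectrum >= 0.
          n = R.shape[1]
          A = np.vstack([R, lam * np.eye(n)])
          b = np.concatenate([counts, np.zeros(n)])
          x, _ = nnls(A, b)
          return x

      rng = np.random.default_rng(1)
      R = rng.random((60, 30))
      true = np.exp(-np.linspace(0, 5, 30))    # falling toy "particle spectrum"
      counts = R @ true + 0.05 * rng.standard_normal(60)
      print(tikhonov_nonneg(R, counts, lam=0.1)[:5])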

  6. Suggesting Missing Relations in Biomedical Ontologies Based on Lexical Regularities.

    PubMed

    Quesada-Martínez, Manuel; Fernández-Breis, Jesualdo Tomás; Karlsson, Daniel

    2016-01-01

    The number of biomedical ontologies has increased significantly in recent years. Many such ontologies are the result of efforts of communities of domain experts and ontology engineers. The development and application of quality assurance (QA) methods should help these communities to develop ontologies that are useful for both humans and machines. According to previous studies, biomedical ontologies are rich in natural language content, but most are not so rich in axiomatic terms. Here, we are interested in studying the relation between content in natural language and content in axiomatic form. The analysis of the labels of the classes permits the identification of lexical regularities (LRs), which are sets of words shared by the labels of different classes. Our assumption is that classes exhibiting an LR should be logically related through axioms, which we use to propose an algorithm to detect missing relations in the ontology. Here, we analyse a lexical regularity of SNOMED CT, congenital stenosis, which is reported as problematic by the SNOMED CT maintenance team. PMID:27577409
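
    The LR-detection step (grouping classes whose labels share a word sequence) can be sketched directly; the class labels below are invented toy examples, not SNOMED CT content.

      # Toy sketch of detecting lexical regularities (word bigrams shared
      # across class labels); hypothetical labels, not a real ontology.
      from collections import defaultdict

      labels = {
          "C1": "congenital stenosis of trachea",
          "C2": "congenital stenosis of aorta",
          "C3": "acquired stenosis of trachea",
          "C4": "congenital malformation of aorta",
      }

      def lexical_regularities(labels, min_classes=2):
          """Map each word bigram to the set of classes whose label has it."""
          index = defaultdict(set)
          for cls, text in labels.items():
              words = text.lower().split()
              for i in range(len(words) - 1):
                  index[" ".join(words[i:i + 2])].add(cls)
          return {lr: cs for lr, cs in index.items() if len(cs) >= min_classes}

      for lr, classes in sorted(lexical_regularities(labels).items()):
          print(f"{lr!r} shared by {sorted(classes)}")
      # Classes sharing an LR (e.g., 'congenital stenosis') are candidates
      # for a missing axiomatic relation.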

  7. Destroying the event horizon of regular black holes

    NASA Astrophysics Data System (ADS)

    Li, Zilong; Bambi, Cosimo

    2013-06-01

    Recently, several authors have studied the possibility of overspinning or overcharging an existing black hole to destroy its event horizon and make the central singularity naked. When all the effects are properly taken into account, any attempt to destroy the black hole seems to be doomed to fail, in agreement with the weak cosmic censorship conjecture. In this article, we study the possibility of destroying the event horizon of regular black holes. These objects have no central singularity and therefore they are not protected by the cosmic censorship hypothesis. Our results strongly support the conclusion that regular black holes can be destroyed. If we believe that the central singularity in astrophysical black holes is solved by quantum gravity effects, we might have a chance to see the black hole’s internal region and observe quantum gravity phenomena. As our finding implies the violation of the black hole’s area theorem, the collision of two black holes may release an amount of energy exceeding the Hawking bound, which can be experimentally tested by gravitational wave detectors.

  8. Personalized microbial network inference via co-regularized spectral clustering.

    PubMed

    Imangaliyev, Sultan; Keijser, Bart; Crielaard, Wim; Tsivtsivadze, Evgeni

    2015-07-15

    We use the Human Microbiome Project (HMP) cohort (Peterson et al., 2009) to infer personalized oral microbial networks of healthy individuals. To determine clusters of individuals with similar microbial profiles, a co-regularized spectral clustering algorithm is applied to the dataset. For each cluster discovered, we compute co-occurrence relationships among the microbial species, which determine a microbial network per cluster of individuals. The results of our study suggest that there are several differences in microbial interactions at the personalized network level in healthy oral samples acquired from various niches. Based on the results of co-regularized spectral clustering, we discover two groups of individuals with different topologies of their microbial interaction networks. The results of microbial network inference suggest that niche-wise interactions differ between these two groups. Our study shows that healthy individuals have different microbial clusters according to their oral microbiota. Such personalized microbial networks open the way to a better understanding of the microbial ecology of healthy oral cavities and to new possibilities for future targeted medication. The scripts written in scientific Python and in Matlab, which were used for network visualization, are provided for download on the website http://learning-machines.com/. PMID:25842007
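
    Scikit-learn ships plain (single-view) spectral clustering, which is enough to sketch the clustering step on a toy abundance matrix; the co-regularization across niches described in the paper is an extension not shown here.

      # Hedged sketch: single-view spectral clustering of synthetic microbial
      # abundance profiles; the paper's co-regularized variant couples
      # several such views (niches), which this omits.
      import numpy as np
      from sklearn.cluster import SpectralClustering

      rng = np.random.default_rng(2)
      # Toy abundance matrix: 40 individuals x 25 microbial species.
      profiles = np.vstack([
          rng.random((20, 25)) + np.r_[np.ones(12), np.zeros(13)],
          rng.random((20, 25)) + np.r_[np.zeros(13), np.ones(12)],
      ])
      model = SpectralClustering(n_clusters=2, affinity="rbf", random_state=0)
      groups = model.fit_predict(profiles)
      print(groups)  # two groups of individuals with distinct profiles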

  9. Theory of volume transition in polyelectrolyte gels with charge regularization.

    PubMed

    Hua, Jing; Mitra, Mithun K; Muthukumar, M

    2012-04-01

    We present a theory for polyelectrolyte gels that allow the effective charge of the polymer backbone to self-regulate. Using a variational approach, we obtain an expression for the free energy of gels that accounts for the gel elasticity, free energy of mixing, counterion adsorption, local dielectric constant, electrostatic interaction among polymer segments, electrolyte ion correlations, and self-consistent charge regularization on the polymer strands. This free energy is then minimized to predict the behavior of the system as characterized by the gel volume fraction as a function of external variables such as temperature and salt concentration. We present results for the volume transition of polyelectrolyte gels in salt-free solvents, solvents with monovalent salts, and solvents with divalent salts. The results of our theoretical analysis capture the essential features of existing experimental results and also provide predictions for further experimentation. Our analysis highlights the importance of the self-regularization of the effective charge for the volume transition of gels in particular, and for charged polymer systems in general. Our analysis also enables us to identify the dominant free energy contributions for charged polymer networks and provides a framework for further investigation of specific experimental systems.

  10. Choice of regularization weight in basis pursuit reflectivity inversion

    NASA Astrophysics Data System (ADS)

    Sen, Mrinal K.; Biswas, Reetam

    2015-02-01

    The seismic inverse problem of estimating P- and S-wave reflectivity from seismic traces has recently been revisited using a basis pursuit denoising inversion (BPI) approach. BPI uses a wedge dictionary to define model constraints, which has been successful in resolving thin beds. Here we address two fundamental problems associated with BPI, namely the uniqueness of the estimate and the choice of the regularization weight λ used in the model norm. We investigated these using very fast simulated re-annealing (VFSR) and gradient projection sparse reconstruction (GPSR) approaches. For a synthetic model with two reflectors separated by one time sample, we are able to demonstrate convergence of VFSR to the true model from different random starting models. Two numerical approaches to estimating the regularization weight were investigated: one uses λ as a hyper-parameter and the other as a temperature-like annealing parameter. In both cases, we were able to obtain λ fairly rapidly. Finally, an analytic formula for λ that is iteration-adaptive was also implemented. Successful applications of our approach to synthetic and field data demonstrate its validity and robustness.
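
    Basis pursuit denoising is equivalent to an L1-penalized least-squares (lasso) problem, so the effect of the regularization weight λ can be sketched with a generic random dictionary (not the paper's wedge dictionary):

      # Sketch: basis pursuit denoising as a lasso problem, sweeping the
      # regularization weight (toy dictionary and data).
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)
      D = rng.standard_normal((80, 200))       # dictionary (samples x atoms)
      true = np.zeros(200)
      true[[10, 11]] = [1.0, -0.8]             # two close "reflectors"
      trace = D @ true + 0.05 * rng.standard_normal(80)

      for lam in [1e-3, 1e-2, 1e-1]:
          model = Lasso(alpha=lam, max_iter=10000).fit(D, trace)
          nz = np.flatnonzero(model.coef_)
          print(f"lambda={lam:g}: {nz.size} nonzero coefficients")
      # Too small a weight over-fits noise; too large suppresses reflectors.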

  11. Filter ensemble regularized common spatial pattern for EEG classification

    NASA Astrophysics Data System (ADS)

    Su, Yuxi; Li, Yali; Wang, Shengjin

    2015-07-01

    Common Spatial Pattern (CSP) is one of the most effective feature extraction algorithms for Brain-Computer Interfaces (BCI). Despite its advantages of wide versatility and high efficiency, CSP is non-robust to noise and prone to overfitting when the number of training samples is limited. To overcome these problems, Regularized Common Spatial Pattern (RCSP) has been proposed. RCSP regularizes the covariance matrix estimation with two parameters, which reduces estimation error and improves stationarity under small-sample conditions. However, RCSP does not make full use of frequency information. In this paper, we present a filter ensemble technique for RCSP (FERCSP) to further extract frequency information and aggregate all the RCSPs efficiently into an ensemble-based solution. The performance of the proposed algorithm is evaluated on data set IVa of BCI Competition III against five other RCSP-based algorithms. The experimental results show that FERCSP significantly outperforms the existing methods in classification accuracy, beating the CSP and R-CSP-A algorithms in all five subjects with an average improvement of 6% in accuracy.
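
    A single-shrinkage-parameter variant of regularized CSP fits in a few lines and conveys the idea; the paper's RCSP uses two regularization parameters plus a filter-bank ensemble, which this sketch omits.

      # Minimal sketch of regularized CSP with one shrinkage parameter
      # (toy random "EEG" data; not the FERCSP implementation).
      import numpy as np
      from scipy.linalg import eigh

      def rcsp_filters(X1, X2, gamma=0.1, n_filters=4):
          """X1, X2: (trials, channels, samples) per class; returns filters."""
          def cov(X):
              C = np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
              d = C.shape[0]
              # Shrink toward a scaled identity to stabilize the estimate.
              return (1 - gamma) * C + gamma * np.trace(C) / d * np.eye(d)
          C1, C2 = cov(X1), cov(X2)
          # Generalized eigenproblem C1 w = lambda (C1 + C2) w.
          vals, vecs = eigh(C1, C1 + C2)
          order = np.argsort(vals)
          pick = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
          return vecs[:, pick].T

      rng = np.random.default_rng(4)
      X1 = rng.standard_normal((30, 8, 256))
      X2 = 1.5 * rng.standard_normal((30, 8, 256))
      print(rcsp_filters(X1, X2).shape)  # (4, 8)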

  12. Autocorrelation and regularization in digital images. I - Basic theory

    NASA Technical Reports Server (NTRS)

    Jupp, David L. B.; Strahler, Alan H.; Woodcock, Curtis E.

    1988-01-01

    Spatial structure occurs in remotely sensed images when the imaged scenes contain discrete objects that are identifiable in that their spectral properties are more homogeneous within than between them and other scene elements. The spatial structure introduced is manifest in statistical measures such as the autocovariance function and variogram associated with the scene, and it is possible to formulate these measures explicitly for scenes composed of simple objects of regular shapes. Digital images result from sensing scenes by an instrument with an associated point spread function (PSF). Since there is averaging over the PSF, the effect, termed regularization, induced in the image data by the instrument will influence the observable autocovariance and variogram functions of the image data. It is shown how the autocovariance or variogram of an image is a composition of the underlying scene covariance convolved with an overlap function, which is itself a convolution of the PSF. The functional form of this relationship provides an analytic basis for scene inference and eventual inversion of scene model parameters from image data.

  13. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, whereas existing methods still need to be improved in many aspects. On one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by the existing methods are not clear enough. We do the following work. First, we propose a method to extract the midfrequency features for dictionary learning. This method brings the benefit of a reduction of the memory and time complexity without sacrificing the performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct the sharp step edges. This model adds a novel constraint on the downsampling version of the high-resolution image to wipe off the details and artifacts and sharpen the step edges. Finally, step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and the reconstruction quality.

  14. Regularization of parallel MRI reconstruction using in vivo coil sensitivities

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Otazo, Ricardo; Xu, Jian; Sodickson, Daniel K.

    2009-02-01

    Parallel MRI can achieve increased spatiotemporal resolution in MRI by simultaneously sampling reduced k-space data with multiple receiver coils. One requirement that different parallel MRI techniques have in common is the need to determine spatial sensitivity information for the coil array. This is often done by smoothing the raw sensitivities obtained from low-resolution calibration images, for example via polynomial fitting. However, this sensitivity post-processing can be both time-consuming and error-prone. Another important factor in parallel MRI is noise amplification in the reconstruction, which is due to non-unity transformations in the image reconstruction associated with spatially correlated coil sensitivity profiles. Generally, regularization approaches, such as Tikhonov and SVD-based methods, are applied to reduce the SNR loss, at the price of introducing residual aliasing. In this work, we present a regularization approach using in vivo coil sensitivities in parallel MRI to avoid introducing these potential errors into the reconstruction. The mathematical background of the proposed method is explained, and the technique is demonstrated with phantom images. The effectiveness of the proposed method is then illustrated clinically in a whole-heart 3D cardiac MR acquisition within a single breath-hold. The proposed method can not only overcome the sensitivity calibration problem, but also suppress a substantial portion of reconstruction-related noise without noticeable introduction of residual aliasing artifacts.
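
    The core of any regularized SENSE-type reconstruction is a small Tikhonov-damped unfolding solve per aliased pixel group. The sketch below uses random complex sensitivities, not in vivo ones:

      # Sketch of Tikhonov-regularized SENSE unfolding for one aliased pixel
      # group: solve (S^H S + lambda I) x = S^H y, where S holds the coil
      # sensitivities of the overlapped pixels. Generic illustration only.
      import numpy as np

      def sense_unfold(S, y, lam=0.05):
          """S: (coils, folded_pixels) sensitivities; y: (coils,) data."""
          n = S.shape[1]
          lhs = S.conj().T @ S + lam * np.eye(n)
          return np.linalg.solve(lhs, S.conj().T @ y)

      rng = np.random.default_rng(5)
      S = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))
      x_true = np.array([1.0 + 0.5j, 0.3 - 0.2j])   # two overlapped pixels
      y = S @ x_true + 0.01 * rng.standard_normal(8)
      print(np.round(sense_unfold(S, y), 3))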

  15. Regular patterns in subglacial bedforms demonstrate emergent field behaviour

    NASA Astrophysics Data System (ADS)

    Clark, Chris; Ely, Jeremy; Spagnolo, Matteo; Hahn, Ute; Stokes, Chris; Hughes, Anna

    2016-04-01

    Somewhat counter-intuitively, ice-sheets abhor flat beds when flowing over soft sedimentary substrates. Instead, they produce an undulated surface, metres in relief and with length-scales of hundreds of metres. The resistive stresses that such bumps impart on ice flow affect the functioning of ice sheets by slowing ice transfer to lower elevations for melting and calving. The most abundant roughness elements are drumlins, streamlined in the direction of ice flow. Understanding their formation has eluded scientific explanation for almost two centuries, with the literature seeking mechanistic explanations for individual bumps. Here we analyse tens of thousands of drumlins and find that they possess a strong regularity in their spatial positioning, which requires interactions between drumlins during their formation. This demonstrates a pattern-forming behaviour that requires explanation at the scale of drumlinised landscapes, beyond that of individual drumlins. Such regularity is expected to arise from interdependence between ice flow, sediment flux and the shape of the bed, with drumlins representing a specific emergent property of these interactions. That bed roughness is found to organise itself into specific, predictable and patterned length-scales might assist the next generation of 'sliding laws' that incorporate ice-bed interactions, thereby improving modelling of ice-sheet flow.

  16. Talking Physics to Regular People: The Why and the How

    NASA Astrophysics Data System (ADS)

    Perkowitz, Sidney

    2013-04-01

    The huge popular interest in the Higgs boson shows that non-physicists can be fascinated by the ideas of physics, even highly abstract ones. That's one good reason to talk physics to ``regular people.'' A second important reason is that society supports physics and in return, deserves to know what physicists are doing. Another is the need to engage young people who may become physicists. Yet another is that when we translate our work so anyone can grasp it, we ourselves better understand it and what it means outside the lab. Especially in today's climate where funding for science, and science itself, are under threat, it's essential that regular people know us, what we do, and why it is important. That's the ``why'' of talking physics. To discuss the ``how,'' I'll draw on my long and extensive experience in presenting physics, technology and science to non-scientists through books and articles, blogs, videos, lectures, stage and museum works, and media appearances (see http://sidneyperkowitz.net). I'll offer ideas about talking physics to different groups, at different levels, and for different purposes, and about how to use such outreach to enrich your own career in physics while helping the physics community.

  17. Personalized microbial network inference via co-regularized spectral clustering.

    PubMed

    Imangaliyev, Sultan; Keijser, Bart; Crielaard, Wim; Tsivtsivadze, Evgeni

    2015-07-15

    We use the Human Microbiome Project (HMP) cohort (Peterson et al., 2009) to infer personalized oral microbial networks of healthy individuals. To determine clusters of individuals with similar microbial profiles, a co-regularized spectral clustering algorithm is applied to the dataset. For each cluster discovered, we compute co-occurrence relationships among the microbial species, which determine a microbial network per cluster of individuals. The results of our study suggest that there are several differences in microbial interactions at the personalized network level in healthy oral samples acquired from various niches. Based on the results of co-regularized spectral clustering, we discover two groups of individuals with different topologies of their microbial interaction networks. The results of microbial network inference suggest that niche-wise interactions differ between these two groups. Our study shows that healthy individuals have different microbial clusters according to their oral microbiota. Such personalized microbial networks open the way to a better understanding of the microbial ecology of healthy oral cavities and to new possibilities for future targeted medication. The scripts written in scientific Python and in Matlab, which were used for network visualization, are provided for download on the website http://learning-machines.com/.

  18. Regularization approach for tomosynthesis X-ray inspection

    NASA Astrophysics Data System (ADS)

    Tigkos, Konstantinos; Hassler, Ulf; Holub, Wolfgang; Woerlein, Norbert; Rehak, Markus

    2014-02-01

    X-ray inspection is intended to be used as an escalation technique for the inspection of carbon fiber reinforced plastics (CFRP) in aerospace applications, especially in case of unclear indications from ultrasonic or other NDT modalities. Due to their large dimensions, most aerospace components cannot be scanned by conventional computed tomography. In such cases, X-ray laminography may be applied, allowing a pseudo-3D slice-by-slice reconstruction of the sample with tomosynthesis. However, due to the limited-angle acquisition geometry, reconstruction artifacts arise, especially at surfaces parallel to the imaging plane. To regularize the tomosynthesis approach, we propose an additional prescan of the object to detect the outer sample surfaces. We recommend the use of contrasted markers temporarily attached to the sample surfaces; the depth position of the markers is then derived from that prescan. As long as the sample surface remains simple, few markers are required to fit the respective object surfaces. The knowledge of this surface may then be used to regularize the final tomosynthesis reconstruction, performed with markerless projections. It can also serve as prior information for an ART reconstruction or to register a CAD model of the sample. The presented work is carried out within the European FP7 project QUICOM. We demonstrate the proposed approach in a simulation study applying an acquisition geometry suited for CFRP part inspection. A practical verification of the approach is planned later in the project.

  19. Empirical regularities of opening call auction in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Ren, Fei; Ni, Xiao-Hui; Chen, Wei; Zhou, Wei-Xing

    2010-01-01

    We study the statistical regularities of the opening call auction using ultra-high-frequency data for 22 liquid stocks traded on the Shenzhen Stock Exchange in 2003. The distribution of the relative price, defined as the relative difference between the order price in the opening call auction and the closing price of the last trading day, is asymmetric: it displays a sharp peak at zero relative price and a relatively wide peak at negative relative prices. The detrended fluctuation analysis (DFA) method is adopted to investigate the long-term memory of relative order prices. We further study the statistical regularities of order sizes in the opening call auction and observe a number-preference phenomenon known as order size clustering. The probability density function (PDF) of order sizes is well fitted by a q-Gamma function, and long-term memory also exists in the order sizes. In addition, both the average volume and the average number of orders decrease exponentially with the price level away from the best bid or ask price in the limit-order book (LOB) established immediately after the opening call auction, and a price clustering phenomenon is observed.
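
    The DFA method referred to above can be prototyped compactly; this toy version (first-order detrending, synthetic white noise instead of order-price data) recovers the expected exponent of about 0.5.

      # Compact detrended fluctuation analysis (DFA); toy implementation.
      import numpy as np

      def dfa(x, scales):
          """Return fluctuation F(s) for each window size s; the slope of
          log F vs log s estimates the DFA scaling exponent."""
          profile = np.cumsum(x - np.mean(x))
          F = []
          for s in scales:
              n_win = len(profile) // s
              segs = profile[: n_win * s].reshape(n_win, s)
              t = np.arange(s)
              rms = []
              for seg in segs:
                  coef = np.polyfit(t, seg, 1)       # local linear trend
                  rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          return np.array(F)

      rng = np.random.default_rng(6)
      x = rng.standard_normal(4096)                   # uncorrelated noise
      scales = np.array([16, 32, 64, 128, 256])
      alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
      print(f"DFA exponent ~ {alpha:.2f} (about 0.5 for white noise)")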

  20. Characteristics of density currents over regular and irregular rough surfaces

    NASA Astrophysics Data System (ADS)

    Bhaganagar, K.

    2013-12-01

    Direct numerical simulation is used as a tool to understand the effect of surface roughness on the propagation of density currents. Simulations have been performed for lock-exchange flow with a gate separating the dense and the lighter fluid. As the lock is released, the dense fluid collapses, with the lighter fluid on top, resulting in the formation of a horizontally evolving density current. The talk will focus on the fundamental differences between the propagation of density currents over regular and irregular rough surfaces. The flow statistics and the flow structures are discussed. The results reveal that the spacing between the roughness elements is an important factor in classifying the density currents. Empirical relations for the front velocity and location over dense and sparse roughness have been evaluated in terms of the roughness height, the spacing between the elements, and the initial amount of lock fluid. [Figure: DNS results for a dense current flowing over (a) smooth and (b) rough bottoms with egg-carton roughness elements in a regular configuration; the lock-exchange box is located in the middle of the channel with two gates, generating one current moving to the right and one to the left; the current interface presents smaller structures over the rough bottom.]

  1. Effect of Regular Exercise Program on Depression in Hemodialysis Patients

    PubMed Central

    Rezaei, Jahangir; Abdi, Alireza; Rezaei, Mansour; Heydarnezhadian, Jafar; Jalali, Rostam

    2015-01-01

    Background and Aim. Depression is the most common psychological disorder in hemodialysis patients; it decreases their quality of life and increases mortality. This study was conducted to assess the effect of regular exercise on depression in hemodialysis patients. Methods. In a randomized clinical trial, 51 hemodialysis patients were allocated to two groups. The Beck Depression Inventory (BDI) was used to assess depression in the participants. The designed exercise program was taught to the case group using posters and face-to-face instruction. The intervention was carried out three times a week for ten weeks. At the beginning and the end of the study, the depression scores of the subjects were assessed. Data were analyzed with SPSS 16 using descriptive and inferential statistics. Findings. There was no difference between the case and control groups in depression score at the beginning of the study, but there was a significant difference after the intervention (P = 0.016). The mean ± SD depression score in the case group was 23.8 ± 9.29 at the beginning of the study and fell to 11.07 ± 12.64 at the end (P < 0.001). Conclusion. The regular exercise program reduced depression in hemodialysis patients; it is therefore suggested for hemodialysis patients. This trial is registered with the Iranian Registry of Clinical Trials (IRCT), number IRCT201205159763N1. PMID:27347502

  2. General theory of regular biorthogonal pairs and its physical operators

    NASA Astrophysics Data System (ADS)

    Inoue, H.

    2016-08-01

    In this paper, we introduce a general theory of regular biorthogonal sequences and its physical operators. Biorthogonal sequences {ϕ_n} and {ψ_n} in a Hilbert space H are said to be regular if Span{ϕ_n} and Span{ψ_n} are dense in H. The first purpose is to show that there exists a non-singular positive self-adjoint operator T_f in H, defined by an orthonormal basis (ONB) f ≡ {f_n} in H, such that ϕ_n = T_f f_n and ψ_n = T_f^{-1} f_n, n = 0, 1, …, and that such an ONB f is unique. The second purpose is to define and study the lowering operators A_f and B_f^†, the raising operators B_f and A_f^†, and the number operators N_f and N_f^† determined by the non-singular positive self-adjoint operator T_f. These operators connect with quasi-Hermitian quantum mechanics and its relatives. This paper clarifies and simplifies the mathematical structure of this framework and minimizes the required assumptions.

  3. Mesoscopic Higher Regularity and Subadditivity in Elliptic Homogenization

    NASA Astrophysics Data System (ADS)

    Armstrong, Scott; Kuusi, Tuomo; Mourrat, Jean-Christophe

    2016-10-01

    We introduce a new method for obtaining quantitative results in stochastic homogenization for linear elliptic equations in divergence form. Unlike previous works on the topic, our method does not use concentration inequalities (such as Poincaré or logarithmic Sobolev inequalities in the probability space) and relies instead on a higher (C^k, k ≥ 1) regularity theory for solutions of the heterogeneous equation, which is valid on length scales larger than a certain specified mesoscopic scale. This regularity theory, which is of independent interest, allows us to, in effect, localize the dependence of the solutions on the coefficients and thereby accelerate the rate of convergence of the expected energy of the cell problem by a bootstrap argument. The fluctuations of the energy are then tightly controlled using subadditivity. The convergence of the energy gives control of the scaling of the spatial averages of gradients and fluxes (that is, it quantifies the weak convergence of these quantities), which yields, by a new "multiscale" Poincaré inequality, quantitative estimates on the sublinearity of the corrector.

  4. Explicit B-spline regularization in diffeomorphic image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools.

  5. Regularities in responding during performance of a complex choice task.

    PubMed

    Mercado, Eduardo; Orduña, Vladimir

    2015-12-01

    Systematic variations in the rate and temporal patterns of responding under a multiple concurrent-chains schedule were quantified using recurrence metrics and self-organizing maps to assess whether individual rats showed consistent or idiosyncratic patterns. The results indicated that (1) the temporal regularity of response patterns varied as a function of number of training sessions, time on task, magnitude of reinforcement, and reinforcement contingencies; (2) individuals showed heterogeneous, stereotyped patterns of responding, despite similarities in matching behavior; (3) the specific trajectories of behavioral variation shown by individuals were less evident in group-level analyses; and (4) reinforcement contingencies within terminal links strongly modulated response patterns within initial links. Temporal regularity in responding was most evident for responses that led to minimally delayed reinforcers of larger magnitude. Models of response production and selection that take into account the time between individual responses, probabilities of transitions between response options, periodicity within response sequences, and individual differences in response dynamics can clarify the mechanisms that drive behavioral adjustments during operant conditioning. PMID:26077440

  6. [Group psychotherapy of neuroses and personality disorders in regular soldiers].

    PubMed

    Araszkiewicz, A; Florkowski, A; Lucki, Z

    1994-01-01

    Environmental conditions cause neuroses and symptoms of personality disorders in regular soldiers. Military service in highly formalized and hierarchical conditions makes it impossible to express emotions (particularly negative ones), to arrange one's own time, or to choose one's position and place of work. Another important psychotraumatic factor is the excessive load of work and responsibility for the sake of "the service". Psychotherapy is the main part of therapy for neuroses and personality disorders in regular soldiers. The social context is the basis for the theoretical assumptions of the psychotherapy carried out by the authors. Based on the theory of learning, the aims of the applied psychotherapy are the elimination of symptoms and a change in the mode of behaviour. Group psychotherapy is carried out in stationary conditions, in groups of 8 to 13 patients, for 8-9 weeks. The applied methods are: debating psychotherapy, interaction-communicative methods, psychodrawing, musicotherapy, choreotherapy and relaxation techniques. As a result of the therapy, symptomatic improvement was obtained in about 89% of cases and a change of attitude and behaviour in about 81%.

  7. The ROI CT problem: a shearlet-based regularization approach

    NASA Astrophysics Data System (ADS)

    Bubba, T. A.; Porta, F.; Zanghirati, G.; Bonettini, S.

    2016-10-01

    The possibility to significantly reduce the X-ray radiation dose and shorten the scanning time is particularly appealing, especially for the medical imaging community. Region-of-interest Computed Tomography (ROI CT) has this potential and, for this reason, is currently receiving increasing attention. Due to the truncation of projection images, ROI CT is a rather challenging problem. Indeed, the ROI reconstruction problem is severely ill-posed in general, and naive local reconstruction algorithms tend to be very unstable. To obtain a stable and reliable reconstruction under suitable noise circumstances, we formulate the ROI CT problem as a convex optimization problem with a (possibly nonsmooth) regularization term based on shearlets. For the solution, we propose and analyze an iterative approach based on the variable metric inexact line-search algorithm (VMILA). The reconstruction performance of VMILA is compared against different regularization conditions, in the case of fan-beam CT simulated data. The numerical tests show that our approach is insensitive to the location of the ROI and remains very stable also when the ROI size is rather small.

  8. Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.

    PubMed

    Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C

    2015-03-01

    The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well-known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved smooth deformation fields, it cannot properly deal with discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and the smooth part of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons with a bilateral filter. In contrast, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to classical Gaussian filtering. By proper adjustment of two tunable parameters one can obtain more realistic deformations in cases of discontinuity. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well-known POPI dataset. Despite the increased computational complexity, the improved registration result is justified, in particular for abdominal data sets where discontinuities often appear due to sliding organ motion. PMID:25541494
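
    The discontinuity-preserving effect is easy to demonstrate on a toy 1D displacement field; note this sketch drives the range kernel with displacement differences only, whereas the paper's filter also uses image intensities.

      # Toy bilateral regularization of a 1D displacement field: smoothing
      # stops at a sliding-motion discontinuity (illustration of the idea,
      # not the authors' accelerated-Demons implementation).
      import numpy as np

      def bilateral_1d(u, sigma_s=3.0, sigma_r=0.5, radius=8):
          out = np.empty_like(u)
          idx = np.arange(-radius, radius + 1)
          spatial = np.exp(-idx**2 / (2 * sigma_s**2))
          for i in range(len(u)):
              j = np.clip(i + idx, 0, len(u) - 1)
              rangew = np.exp(-(u[j] - u[i]) ** 2 / (2 * sigma_r**2))
              w = spatial * rangew
              out[i] = np.sum(w * u[j]) / np.sum(w)
          return out

      u = np.r_[np.zeros(50), np.ones(50)] + 0.1 * np.random.randn(100)
      smoothed = bilateral_1d(u)
      print(np.round(smoothed[[25, 49, 50, 75]], 2))  # step at 50 preserved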

  9. Topology-based hexahedral regular meshing for wave propagation

    NASA Astrophysics Data System (ADS)

    Fousse, Allan; Bertrand, Yves; Rodrigues, Dominique

    2000-10-01

    Numerical simulations allow the study of physical phenomena that are impossible or difficult to realize in the real world. As an example, it is not conceivable to cause an atomic explosion or an earthquake in order to explore the effects on a building or a flood barrier. To be realistic, this kind of simulation (wave propagation) must take into account all the characteristics of the domain where it takes place, and more particularly its three-dimensional aspect. Therefore, numericians need not only a three-dimensional model of the domain, but also a meshing of this domain. When finite-difference-based methods are used, this meshing must be hexahedral and regular. Moreover, new developments in the numerical propagation code provide tools for using meshes that interpolate the interior subdivisions of the domain. However, the manual generation of this kind of meshing is a long and difficult process. To improve and simplify this work, we propose a semi-automatic algorithm based on a block subdivision. It makes use of the dissociation between the geometrical and topological aspects; indeed, with our topological model a regular hexahedral meshing is far easier to generate. The meshing geometry can be supplied by a geometric model, with reconstruction, interpolation or parameterization methods, but it is in any case completely guided by the topological model. The result is software presently used by the Commissariat à l'Énergie Atomique in several full-size studies, notably within the framework of the Comprehensive Test Ban Treaty.

  10. FPGA-accelerated algorithm for the regular expression matching system

    NASA Astrophysics Data System (ADS)

    Russek, P.; Wiatr, K.

    2015-01-01

    This article describes an algorithm to support a regular-expression matching system. The goal was to achieve an attractive-performance system with low energy consumption. The basic idea of the algorithm comes from the concept of the Bloom filter. It starts from the extraction of static substrings from the strings of the regular expressions. The algorithm is devised to gain from its decomposition into parts intended to be executed by custom hardware and by the central processing unit (CPU). A pipelined custom processor architecture is proposed and the software algorithm explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in a field programmable gate array (FPGA). The performance results and required resources of the above experiments are given. An example target application for the presented solution is computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
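
    The Bloom-filter prefiltering idea translates directly to software. The following pure-Python stand-in (invented substrings and patterns, no FPGA offload) shows the two-stage flow: a cheap probabilistic membership test first, then full regex matching only on candidates.

      # Sketch of the Bloom-filter idea: static substrings from the regexes
      # are hashed into a bit array; only inputs that hit the filter are
      # escalated to full regex matching. Toy stand-in for the FPGA pipeline.
      import re
      from hashlib import blake2b

      class BloomPrefilter:
          def __init__(self, substrings, m=4096, k=3):
              self.m, self.k = m, k
              self.bits = bytearray(m // 8)
              for s in substrings:
                  for h in self._hashes(s):
                      self.bits[h // 8] |= 1 << (h % 8)

          def _hashes(self, s):
              # k independent hashes via salted blake2b digests.
              return [int.from_bytes(blake2b(s.encode(), salt=bytes([i]) * 8,
                      digest_size=4).digest(), "big") % self.m
                      for i in range(self.k)]

          def maybe_contains(self, s):
              return all(self.bits[h // 8] >> (h % 8) & 1
                         for h in self._hashes(s))

      patterns = [re.compile(r"evil\d+payload"), re.compile(r"bad(stuff)+")]
      prefilter = BloomPrefilter(["evil", "payload", "badstuff"])
      for token in ["evil42payload", "harmless"]:
          if prefilter.maybe_contains(token[:4]):   # cheap check first
              hit = any(p.search(token) for p in patterns)
              print(token, "->", "match" if hit else "false positive")
          else:
              print(token, "-> filtered out early")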

  11. Genus Ranges of 4-Regular Rigid Vertex Graphs

    PubMed Central

    Buck, Dorothy; Dolzhenko, Egor; Jonoska, Nataša; Saito, Masahico; Valencia, Karin

    2016-01-01

    A rigid vertex of a graph is one that has a prescribed cyclic order of its incident edges. We study orientable genus ranges of 4-regular rigid vertex graphs. The (orientable) genus range is a set of genera values over all orientable surfaces into which a graph is embedded cellularly, and the embeddings of rigid vertex graphs are required to preserve the prescribed cyclic order of incident edges at every vertex. The genus ranges of 4-regular rigid vertex graphs are sets of consecutive integers, and we address two questions: which intervals of integers appear as genus ranges of such graphs, and what types of graphs realize a given genus range. For graphs with 2n vertices (n > 1), we prove that all intervals [a, b] for all a < b ≤ n, and singletons [h, h] for some h ≤ n, are realized as genus ranges. For graphs with 2n − 1 vertices (n ≥ 1), we prove that all intervals [a, b] for all a < b ≤ n except [0, n], and [h, h] for some h ≤ n, are realized as genus ranges. We also provide constructions of graphs that realize these ranges.

  12. Stable sequential Kuhn-Tucker theorem in iterative form or a regularized Uzawa algorithm in a regular nonlinear programming problem

    NASA Astrophysics Data System (ADS)

    Sumin, M. I.

    2015-06-01

    A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.

  13. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Disability for any regular employment... RAILROAD RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child...

  14. On the Distinction between Regular and Irregular Inflectional Morphology: Evidence from Dinka

    ERIC Educational Resources Information Center

    Ladd, D. Robert; Remijsen, Bert; Manyang, Caguor Adong

    2009-01-01

    Discussions of the psycholinguistic significance of regularity in inflectional morphology generally deal with languages in which regular forms can be clearly identified and revolve around whether there are distinct processing mechanisms for regular and irregular forms. We present a detailed description of Dinka's notoriously irregular noun number…

  15. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Disability for any regular employment... RAILROAD RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child...

  16. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Disability for any regular employment, defined... RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is...

  17. El Maestro de Sala Regular de Clases Ante el Proceso de Inclusion del Nino Con Impedimento

    ERIC Educational Resources Information Center

    Rosa Morales, Awilda

    2012-01-01

    The purpose of this research was to describe the experiences of regular class elementary school teachers with the Puerto Rico Department of Education who have worked with handicapped children who have been integrated to the regular classroom. Five elementary level regular class teachers were selected in the northwest zone of Puerto Rico who during…

  18. 34 CFR 686.5 - Enrollment status for students taking regular and correspondence courses.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Enrollment status for students taking regular and... Enrollment status for students taking regular and correspondence courses. (a) If, in addition to regular... be included in determining the student's enrollment status to the extent permitted under paragraph...

  19. Integration/Inclusion Needs Assessment: Providing Education for Everyone in Regular Schools (PEERS). Revised Edition.

    ERIC Educational Resources Information Center

    Halvorsen, Ann T.; And Others

    This needs assessment instrument was developed as part of the PEERS (Providing Education for Everyone in Regular Schools) Project, a California project to integrate students with severe disabilities who were previously at special centers into services at regular school sites and students who were in special classes in regular schools into general…

  20. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    SciTech Connect

    Lin, Youzuo; Huang, Lianjie; Zhang, Zhigang

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Due to the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting images are then a trade-off among regions with different smoothness or noise levels, over-regularized in some regions and under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, utilizing the spatially-variant regularization scheme, the target regions are well reconstructed while the noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information without increasing the computational costs or the computer memory requirement.
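
    For a linear toy problem, spatial variation of the Tikhonov weight amounts to replacing λI with a diagonal weight matrix. The sketch below (a generic least-squares analogue, not the waveform-inversion code) damps an assumed "target zone" less than the rest of the model:

      # Generic sketch of spatially-variant Tikhonov regularization:
      # each model cell gets its own weight, so target regions can be
      # lightly regularized while noisy regions are damped harder.
      import numpy as np

      def sv_tikhonov(A, b, weights):
          """Minimize ||A x - b||^2 + ||W x||^2 with W = diag(weights)."""
          W = np.diag(weights)
          return np.linalg.solve(A.T @ A + W.T @ W, A.T @ b)

      rng = np.random.default_rng(7)
      A = rng.standard_normal((60, 30))
      x_true = np.zeros(30); x_true[10:15] = 1.0   # localized toy anomaly
      b = A @ x_true + 0.1 * rng.standard_normal(60)

      w = np.full(30, 5.0)               # strong damping by default...
      w[10:15] = 0.5                     # ...weak inside the target zone
      print(np.round(sv_tikhonov(A, b, w)[8:17], 2))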

  1. Project S.E.R.T. - Special Education for Regular Teachers.

    ERIC Educational Resources Information Center

    Hale, Steve; And Others

    Evaluated in two field tests with 50 regular teachers was a set of eight instructional modules designed to develop the competencies of regular teachers involved in mainstreaming handicapped children as part of Project SERT (Special Education for Regular Teachers). The following modules were developed: comprehensive special education, formal…

  2. Regularized Multitask Learning for Multidimensional Log-Density Gradient Estimation.

    PubMed

    Yamane, Ikko; Sasaki, Hiroaki; Sugiyama, Masashi

    2016-07-01

    Log-density gradient estimation is a fundamental statistical problem and possesses various practical applications such as clustering and measuring nongaussianity. A naive two-step approach of first estimating the density and then taking its log gradient is unreliable because an accurate density estimate does not necessarily lead to an accurate log-density gradient estimate. To cope with this problem, a method to directly estimate the log-density gradient without density estimation has been explored and demonstrated to work much better than the two-step method. The objective of this letter is to improve the performance of this direct method in multidimensional cases. Our idea is to regard the problem of log-density gradient estimation in each dimension as a task and apply regularized multitask learning to the direct log-density gradient estimator. We experimentally demonstrate the usefulness of the proposed multitask method in log-density gradient estimation and mode-seeking clustering.

  3. Constructing a logical, regular axis topology from an irregular topology

    DOEpatents

    Faraj, Daniel A.

    2014-07-01

    Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.

  4. Constructing a logical, regular axis topology from an irregular topology

    DOEpatents

    Faraj, Daniel A.

    2014-07-22

    Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.

  5. Bias correction for magnetic resonance images via joint entropy regularization.

    PubMed

    Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang

    2014-01-01

    Due to imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms are proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the proposed methods can effectively remove the bias field and perform comparably to state-of-the-art methods.

  6. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least-squares method, where the finite element method and the Euler method are used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the correlation coefficient. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
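
    A minimal version of such a fit, assuming a ridge (L2) penalty and SciPy's stock ODE integrator in place of the paper's finite-element/Euler discretization:

      # Sketch: estimating the SIR contact rate beta from incidence data by
      # regularized least squares (synthetic data, simple ridge penalty).
      import numpy as np
      from scipy.integrate import odeint
      from scipy.optimize import minimize_scalar

      def sir(y, t, beta, gamma=0.2):
          S, I, R = y
          return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

      t = np.linspace(0, 30, 31)
      # Synthetic stand-in for reported incidence (true beta = 0.8).
      data = odeint(sir, [0.99, 0.01, 0.0], t, args=(0.8,))[:, 1]
      data = data + 0.01 * np.random.default_rng(8).standard_normal(len(t))

      def loss(beta, lam=1e-3):
          I_model = odeint(sir, [0.99, 0.01, 0.0], t, args=(beta,))[:, 1]
          return np.sum((I_model - data) ** 2) + lam * beta**2  # ridge fit

      res = minimize_scalar(loss, bounds=(0.1, 2.0), method="bounded")
      print(f"estimated contact rate beta ~ {res.x:.3f} (true 0.8)")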

  7. General Structure of Regularization Procedures in Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Titterington, D. M.

    1985-03-01

    Regularization procedures are portrayed as compromises between the conflicting aims of fidelity with the observed image and perfect smoothness. The selection of an estimated image involves the choice of a prescription, indicating the manner of smoothing, and of a smoothing parameter, which defines the degree of smoothing. Prescriptions of the minimum-penalized-distance type are considered and are shown to be equivalent to maximum-penalized-smoothness prescriptions. These include, therefore, constrained least-squares and constrained maximum entropy methods. The formal link with Bayesian statistical analysis is pointed out. Two important methods of choosing the degree of smoothing are described, one based on criteria of consistency with the data and one based on minimizing a risk function. The latter includes minimum mean-squared error criteria. Although the maximum entropy method has some practical advantages, there seems no case for it to hold a special place on philosophical grounds, in the context of image reconstruction.
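
    The compromise described above can be summarized in one penalized criterion (generic notation, with λ the smoothing parameter):

      \hat{x}_{\lambda} \;=\; \arg\min_{x}\; \|y - Ax\|^{2} \;+\; \lambda\,\Omega(x),

    where choosing Ω(x) = ‖Lx‖² gives a constrained least-squares prescription, while Ω(x) = Σ_i x_i log x_i (a negative entropy) gives the maximum entropy method.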

  8. Weak Gravitational Lensing from Regular Bardeen Black Holes

    NASA Astrophysics Data System (ADS)

    Ghaffarnejad, Hossein; niad, Hassan

    2016-03-01

    In this article we study weak gravitational lensing by a regular Bardeen black hole with scalar charge g and mass m. We investigate the angular position and magnification of non-relativistic images in two cases, depending on the presence or absence of a photon sphere. Defining the dimensionless charge parameter q = g/(2m), the photon sphere disappears for |q| > 24√5/125, in which case the spacetime contains strongly naked singularities. We determine the basic lensing parameters in terms of the scalar charge using a perturbative method and find that the parity of the images differs in the two cases: (a) strongly naked singularities are present in the spacetime; (b) the singularity of the spacetime is weak or absent (the black-hole lens).

  9. Restriction enzyme cutting site distribution regularity for DNA looping technology.

    PubMed

    Shang, Ying; Zhang, Nan; Zhu, Pengyu; Luo, Yunbo; Huang, Kunlun; Tian, Wenying; Xu, Wentao

    2014-01-25

    The restriction enzyme cutting site distribution regularity and looping conditions were studied systematically. We obtained the restriction site distributions of 13 commonly used restriction enzymes in 5 model organism genomes using two novel self-compiled software programs. All of the average distances between adjacent restriction sites fell sharply with increasing statistical interval, and most fragments were 0-499 bp. Shorter DNA fragments gave lower looping rates, and the looping rate was also directly proportional to the DNA concentration; when the length exceeded 500 bp, the concentration no longer affected the looping rate. Therefore, the preferred fragment length is longer than 500 bp, and the fragment should not contain the restriction sites that will be used for digestion. In order to bring looping efficiencies to nearly 100%, 4-5 single-cohesive-end systems are recommended to digest the genome separately.
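
    Site-spacing statistics of the kind reported above are easy to reproduce for a single enzyme; the sketch below scans a random sequence (not a real genome) for the EcoRI recognition site GAATTC.

      # Toy scan for restriction-site spacing: locate all occurrences of a
      # recognition sequence and report the gaps between adjacent sites.
      import re
      import numpy as np

      rng = np.random.default_rng(9)
      genome = "".join(rng.choice(list("ACGT"), size=200_000))
      site = "GAATTC"                          # EcoRI recognition sequence

      positions = [m.start() for m in re.finditer(site, genome)]
      gaps = np.diff(positions)
      print(f"{len(positions)} sites; mean gap {gaps.mean():.0f} bp; "
            f"{np.mean(gaps < 500):.0%} of gaps under 500 bp")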

  10. Surface tension regularizes the crack singularity of adhesion.

    PubMed

    Karpitschka, Stefan; van Wijngaarden, Leen; Snoeijer, Jacco H

    2016-05-11

    The elastic and adhesive properties of a solid surface can be quantified by indenting it with a rigid sphere. Indentation tests are classically described by the JKR-law when the solid is very stiff, while recent work highlights the importance of surface tension for exceedingly soft materials. Here we show that surface tension plays a crucial role even in stiff solids: Young's wetting angle emerges as a boundary condition and this regularizes the crack-like singularity at the edge of adhesive contacts. We find that the edge region exhibits a universal, self-similar structure that emerges from the balance of surface tension and elasticity. The similarity theory is solved analytically and provides a complete description of adhesive contacts, by which we reconcile global adhesion laws and local contact mechanics. PMID:27087459

  11. Regularity underlies erratic population abundances in marine ecosystems

    PubMed Central

    Sun, Jie; Cornelius, Sean P.; Janssen, John; Gray, Kimberly A.; Motter, Adilson E.

    2015-01-01

    The abundance of a species' population in an ecosystem is rarely stationary, often exhibiting large fluctuations over time. Using historical data on marine species, we show that the year-to-year fluctuations of population growth rate obey a well-defined double-exponential (Laplace) distribution. This striking regularity allows us to devise a stochastic model despite seemingly irregular variations in population abundances. The model identifies the effect of reduced growth at low population density as a key factor missed in current approaches of population variability analysis and without which extinction risks are severely underestimated. The model also allows us to separate the effect of demographic stochasticity and show that single-species growth rates are dominantly determined by stochasticity common to all species. This dominance—and the implications it has for interspecies correlations, including co-extinctions—emphasizes the need for ecosystem-level management approaches to reduce the extinction risk of the individual species themselves. PMID:25972438
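
    A compact Python check of the kind of regularity reported above, on synthetic data: year-to-year log growth rates are computed from an abundance series and a double-exponential (Laplace) distribution is fitted by maximum likelihood (median location, mean-absolute-deviation scale); the series itself is an assumption for illustration.

        import numpy as np

        def fit_laplace(x):
            # Maximum-likelihood Laplace fit: location = median, scale = mean |x - mu|
            mu = np.median(x)
            b = np.mean(np.abs(x - mu))
            return mu, b

        rng = np.random.default_rng(3)
        abundance = np.exp(np.cumsum(rng.laplace(0.0, 0.3, size=500)))  # synthetic series
        growth = np.diff(np.log(abundance))         # year-to-year log growth rates
        mu, b = fit_laplace(growth)
        print(f"location={mu:.3f} scale={b:.3f}")   # scale should land near 0.3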

  12. Diffuse light tomography to detect blood vessels using Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Kazanci, Huseyin O.; Jacques, Steven L.

    2016-04-01

    Detection of blood vessels within light-scattering tissues involves detection of subtle shadows as blood absorbs light. These shadows are diffuse but measurable by a set of source-detector pairs in a spatial array of sources and detectors on the tissue surface. The measured shadows can reconstruct the internal position(s) of blood vessels. The tomographic method involves a set of Ns sources and Nd detectors such that Nsd = Ns x Nd source-detector pairs produce Nsd measurements, each interrogating the tissue with a unique perspective, i.e., a unique region of sensitivity to voxels within the tissue. This tutorial report describes the reconstruction of the image of a blood vessel within a soft tissue based on such source-detector measurements, by solving a matrix equation using Tikhonov regularization. This is not a novel contribution, but rather a simple introduction to a well-known method, demonstrating its use in mapping blood perfusion.
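
    A minimal Python version of the reconstruction step this tutorial describes, assuming a known sensitivity matrix A mapping voxel absorption to source-detector shadow measurements; the image is recovered from the measurement vector by Tikhonov regularization (the random matrix and phantom below are illustrative stand-ins).

        import numpy as np

        def tikhonov(A, b, lam):
            # Solve min ||A x - b||^2 + lam ||x||^2 via the regularized normal equations
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        rng = np.random.default_rng(4)
        n_pairs, n_vox = 64, 100                   # source-detector pairs, tissue voxels
        A = rng.random((n_pairs, n_vox))           # stand-in sensitivity matrix
        x_true = np.zeros(n_vox)
        x_true[40:45] = 1.0                        # "blood vessel" voxels
        b = A @ x_true + rng.normal(0, 0.01, n_pairs)   # noisy shadow measurements
        x = tikhonov(A, b, lam=0.1)
        print("peak voxels:", np.argsort(x)[-5:])  # should cluster near indices 40-44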

  13. Dimensional reduction in numerical relativity: Modified Cartoon formalism and regularization

    NASA Astrophysics Data System (ADS)

    Cook, William G.; Figueras, Pau; Kunesch, Markus; Sperhake, Ulrich; Tunyasuvunakool, Saran

    2016-06-01

    We present in detail the Einstein equations in the Baumgarte-Shapiro-Shibata-Nakamura formulation for the case of D-dimensional spacetimes with SO(D - d) isometry based on a method originally introduced in Ref. 1. Regularized expressions are given for a numerical implementation of this method on a vertex centered grid including the origin of the quasi-radial coordinate that covers the extra dimensions with rotational symmetry. Axisymmetry, corresponding to the value d = D - 2, represents a special case with fewer constraints on the vanishing of tensor components and is conveniently implemented in a variation of the general method. The robustness of the scheme is demonstrated for the case of a black-hole head-on collision in D = 7 spacetime dimensions with SO(4) symmetry.

  14. Children's implicit learning of graphotactic and morphological regularities.

    PubMed

    Pacton, Sébastien; Fayol, Michel; Perruchet, Pierre

    2005-01-01

    In French, the transcription of the same sound can be guided by both probabilistic graphotactic constraints (e.g., /εt/ is more often transcribed ette after -v than after -f) and morphological constraints (e.g., /εt/ is always transcribed ette when used as a diminutive suffix). Three experiments showed that pseudo-word spellings of 8- to 11-year-old children and adults were influenced by both types of constraints. The influence of graphotactic regularities persisted when reliance on morphological rules was possible, without any falling off as a function of age. This suggests that rules are not abstracted, even after massive amounts of exposure to rule-based material. These results can be accounted for by a statistical model of implicit learning. PMID:15784085

  15. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  16. Regularized Multitask Learning for Multidimensional Log-Density Gradient Estimation.

    PubMed

    Yamane, Ikko; Sasaki, Hiroaki; Sugiyama, Masashi

    2016-07-01

    Log-density gradient estimation is a fundamental statistical problem and possesses various practical applications such as clustering and measuring nongaussianity. A naive two-step approach of first estimating the density and then taking its log gradient is unreliable because an accurate density estimate does not necessarily lead to an accurate log-density gradient estimate. To cope with this problem, a method to directly estimate the log-density gradient without density estimation has been explored and demonstrated to work much better than the two-step method. The objective of this letter is to improve the performance of this direct method in multidimensional cases. Our idea is to regard the problem of log-density gradient estimation in each dimension as a task and apply regularized multitask learning to the direct log-density gradient estimator. We experimentally demonstrate the usefulness of the proposed multitask method in log-density gradient estimation and mode-seeking clustering. PMID:27171983

  17. Effect of regular exercise on health and disease.

    PubMed

    Karacabey, Kursat

    2005-10-01

    It has long been known that exercise increases physical fitness and has beneficial effects on general health, as well as playing a preventive role against various disease states. To decrease the risk of disease and maintain good health, the natural defense system of the organism needs to be strengthened. It is thought that, in addition to increasing the body's resistance to disease by strengthening the immune system, regular exercise decreases convalescence time, increases work efficiency, and improves the sporting performance of the individual, all of which contribute positively to the national economy. The positive effects of regular aerobic exercise, such as strengthening of the immune system and protection against disease, together with its positive effects on quality of life, help to emphasize the importance of physical exercise and improve the general view of sports in society. PMID:16264392

  18. Empirical regularities of order placement in the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Chen, Wei; Zhou, Wei-Xing

    2008-05-01

    Using ultra-high-frequency data extracted from the order flows of 23 stocks traded on the Shenzhen Stock Exchange, we study the empirical regularities of order placement in the opening call auction, cool period and continuous auction. The distributions of relative logarithmic prices against reference prices in the three time periods are qualitatively the same with quantitative discrepancies. The order placement behavior is asymmetric between buyers and sellers and between the inside-the-book orders and outside-the-book orders. In addition, the conditional distributions of relative prices in the continuous auction are independent of the bid-ask spread and volatility. These findings are crucial to build an empirical behavioral microscopic model based on order flows for Chinese stocks.

  19. Numerical optimization method for packing regular convex polygons

    NASA Astrophysics Data System (ADS)

    Galiev, Sh. I.; Lisafina, M. S.

    2016-08-01

    An algorithm is presented for the approximate solution of the problem of packing regular convex polygons in a given closed bounded domain G so as to maximize the total area of the packed figures. On G a grid is constructed whose nodes generate a finite set W on G, and the centers of the figures to be packed can be placed only at some points of W. The problem of packing these figures with centers in W is reduced to a 0-1 linear programming problem. A two-stage algorithm for solving the resulting problems is proposed. The algorithm finds packings of the indicated figures in an arbitrary closed bounded domain on the plane. Numerical results are presented that demonstrate the effectiveness of the method.

  1. Correlation applied to the recognition of regular geometric figures

    NASA Astrophysics Data System (ADS)

    Lasso, William; Morales, Yaileth; Vega, Fabio; Díaz, Leonardo; Flórez, Daniel; Torres, Cesar

    2013-11-01

    We developed a system capable of recognizing regular geometric figures. Images are captured automatically by the software through a process that validates the presence of a figure in front of the camera lens; the digitized image is compared with a database of previously captured images, then recognized and finally identified, with the name of the identified figure spoken aloud. The contribution of the proposed system is that data acquisition is done in real time using smart spy glasses with a USB interface, offering a system that is equally optimal but much more economical. This tool may be useful as an application through which visually impaired people can obtain information about their surrounding environment.

  2. Local graph regularized coding for salient object detection

    NASA Astrophysics Data System (ADS)

    Huo, Lina; Yang, Shuyuan; Jiao, Licheng; Wang, Shuang; Shi, Jiao

    2016-07-01

    Subspace-segmentation-based salient object detection has received increasing interest in recent years. To preserve the locality and similarity of regions, a grouping effect of representation is introduced to segment the salient object and background in subspace. A new saliency map is then calculated by incorporating this local graph regularizer into coding, which explicitly exploits the data self-representation model and thus locates salient regions more accurately. Moreover, a heuristic object-based dictionary is obtained from background superpixels in the border set, removing the image regions that lie within potential object regions. Experimental results on four large benchmark databases demonstrate that the proposed method performs favorably against eight recent state-of-the-art methods in terms of three evaluation criteria, with a reduction in MAE of 19.8% relative to GR and 29.3% relative to CB on the two SED datasets, respectively. Our method also runs faster than the comparative detection approaches.

  3. Superior Regularity in Erosion Patterns by Planar Subsurface Channeling

    SciTech Connect

    Redinger, Alex; Hansen, Henri; Michely, Thomas; Linke, Udo; Rosandi, Yudi; Urbassek, Herbert M.

    2006-03-17

    The onset of pattern formation through exposure of Pt(111) to 5 keV Ar⁺ ions at grazing incidence has been studied at 550 K by scanning tunneling microscopy, supplemented by molecular-dynamics simulations of single ion impacts. A consistent description of pattern formation in terms of atomic scale mechanisms is given. Most surprisingly, pattern formation depends crucially on the angle of incidence of the ions. As soon as this angle allows subsurface channeling of the ions, pattern regularity and alignment with respect to the ion beam greatly improve. These effects are traced back to the positionally aligned formation of vacancy islands through the damage created by the ions at dechanneling locations.

  4. Cation-Dependent Emergence of Regular Motion of a Float.

    PubMed

    Yasui, Daisuke; Yamashita, Hirofumi; Yamamoto, Daigo; Shioi, Akihisa

    2015-10-13

    We report a unique ion-dependent motion of a float at an oil/water interface. The type of motion depended on the cation species that was dissolved in the water. Irregular vibrations occurred when the water contained Ca(2+), back-and-forth motion occurred when the water contained Fe(2+), a type of motion intermediate between these occurred when the water contained Mn(2+), and intermittent long-distance travel occurred when the water contained Fe(3+). This is one of the simplest systems that can be used to show how macroscopic regular motion emerges depending on specific chemicals, which is one of the central issues in the study of biological and biomimetic motions. PMID:26393274

  6. Pauli-Villars Regularization of Non-Abelian Gauge Theories

    NASA Astrophysics Data System (ADS)

    Hiller, J. R.

    2016-07-01

    As an extension of earlier work on QED, we construct a BRST-invariant Lagrangian for SU(N) Yang-Mills theory with fundamental matter, regulated by the inclusion of massive Pauli-Villars (PV) gluons and PV quarks. The underlying gauge symmetry for massless PV gluons is generalized to accommodate the PV-index-changing currents that are required by the regularization. Auxiliary adjoint scalars are used, in a mechanism due to Stueckelberg, to attribute mass to the PV gluons and the PV quarks. The addition of Faddeev-Popov ghosts then establishes a residual BRST symmetry. Although there are drawbacks to the approach, in particular the computational load of a large number of PV fields and a nonlocal interaction of the ghost fields, this formulation could provide a foundation for renormalizable nonperturbative solutions of light-front QCD in an arbitrary covariant gauge.

  7. Neural network for constrained nonsmooth optimization using Tikhonov regularization.

    PubMed

    Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun

    2015-03-01

    This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. First, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problem. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality-feasible region in finite time and is globally convergent to the unique optimal solution of the related strongly convex optimization problem. Compared with existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. Finally, some numerical examples and an application are given to illustrate the effectiveness and improvement of the proposed neural network.

  8. Scene recognition by manifold regularized deep learning architecture.

    PubMed

    Yuan, Yuan; Mou, Lichao; Lu, Xiaoqiang

    2015-10-01

    Scene recognition is an important problem in the field of computer vision because it helps to narrow the gap between computers and human beings in scene understanding. Semantic modeling is a popular technique used to fill the semantic gap in scene recognition. However, most semantic modeling approaches learn shallow, one-layer representations for scene recognition, while ignoring the structural information between related images, often resulting in poor performance. Modeled after the human visual system, and intended to inherit humanlike judgment, a manifold regularized deep architecture is proposed for scene recognition. The proposed deep architecture exploits the structural information of the data, forming a mapping between the visible layer and the hidden layer. With the proposed approach, a deep architecture can be designed to learn high-level features for scene recognition in an unsupervised fashion. Experiments on standard data sets show that our method outperforms the state-of-the-art methods used for scene recognition.

  9. Area (or entropy) product formula for a regular black hole

    NASA Astrophysics Data System (ADS)

    Pradhan, Parthapratim

    2016-02-01

    We compute the area (or entropy) product formula for a regular black hole derived by Ayón-Beato and García (Phys Rev Lett 80:5056, 1998). By explicit and exact calculation, it is shown that the entropy product formula of the two physical horizons strictly depends upon the ADM mass parameter, meaning it is not a universal (mass-independent) quantity. However, a slightly more complicated function of the event horizon area and Cauchy horizon area is indeed a mass-independent quantity. We also compute other thermodynamic properties of the said black hole. We further study the stability of such a black hole by computing the specific heat for both horizons. It is observed that under certain conditions the black hole undergoes a second-order phase transition; a pictorial diagram of the phase transition is given.

  10. Chiral Thirring–Wess model with Faddeevian regularization

    SciTech Connect

    Rahaman, Anisur

    2015-03-15

    Replacing the vector-type interaction of the Thirring–Wess model with a chiral-type interaction, a new model is presented, termed here the chiral Thirring–Wess model. The ambiguity parameters of regularization are chosen so that the model falls into the Faddeevian class. The resulting Faddeevian class of models does not in general possess Lorentz invariance. However, we can exploit the arbitrariness admissible in the ambiguity parameters to relate the quantum mechanically generated ambiguity parameters to the classical parameter involved in the mass-like term of the gauge field, which helps maintain physical Lorentz invariance despite the absence of manifest Lorentz covariance of the model. The phase space structure and the theoretical spectrum of this class of models have been determined through Dirac’s method of quantization of constrained systems.

  11. Improved biological network reconstruction using graph Laplacian regularization.

    PubMed

    Freschi, Valerio

    2011-08-01

    Biological networks reconstruction is a crucial step towards the functional characterization and elucidation of living cells. Computational methods for inferring the structure of these networks are of paramount importance since they provide valuable information regarding organization and behavior of the cell at a system level and also enable careful design of wet-lab experiments. Despite many recent advances, according to the scientific literature, there is room for improvements from both the efficiency and the accuracy point of view in link prediction algorithms. In this article, we propose a new method for the inference of biological networks that makes use of a notion of similarity between graph vertices within the framework of graph regularization for ranking the links to be predicted. The proposed approach results in more accurate classification rates in a wide range of experiments, while the computational complexity is reduced by two orders of magnitude with respect to many current state-of-the-art algorithms.
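
    A toy Python illustration of ranking candidate links with a Laplacian-based vertex similarity; the pseudo-inverse of the graph Laplacian is used here as one concrete similarity notion within the graph-regularization framework (an assumption for illustration, not necessarily the similarity adopted in the article).

        import numpy as np

        def laplacian_kernel(adj):
            # Graph-regularized similarity: pseudo-inverse of L = D - A
            L = np.diag(adj.sum(axis=1)) - adj
            return np.linalg.pinv(L)

        # Small undirected network with one plausible "missing" link
        adj = np.array([[0, 1, 1, 0, 0],
                        [1, 0, 1, 1, 0],
                        [1, 1, 0, 1, 0],
                        [0, 1, 1, 0, 1],
                        [0, 0, 0, 1, 0]], float)
        K = laplacian_kernel(adj)
        absent = [(i, j) for i in range(5) for j in range(i + 1, 5) if adj[i, j] == 0]
        ranked = sorted(absent, key=lambda e: K[e], reverse=True)
        print(ranked[0])   # highest-ranked absent edge under this similarity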

  12. Human behavioral regularity, fractional Brownian motion, and exotic phase transition

    NASA Astrophysics Data System (ADS)

    Li, Xiaohui; Yang, Guang; An, Kenan; Huang, Jiping

    2016-08-01

    The mix of competition and cooperation (C&C) is ubiquitous in human society but remains poorly explored due to the lack of a fundamental method. Here, by developing a Janus game for treating C&C between two sides (suppliers and consumers), we show, for the first time, experimental and simulation evidence for human behavioral regularity. This property is shown to be characterized by fractional Brownian motion associated with an exotic transition between periodic and nonperiodic phases. Furthermore, the periodic phase echoes business cycles, which are well known in reality but still far from being well understood. Our results imply that the Janus game could be a fundamental method for studying C&C among humans in society, and it provides guidance for predicting human behavioral activity from the perspective of fractional Brownian motion.

  14. Identification of cyclic nucleotide gated channels using regular expressions.

    PubMed

    Zelman, Alice K; Dawe, Adam; Berkowitz, Gerald A

    2013-01-01

    Cyclic nucleotide-gated channels (CNGCs) are nonselective cation channels found in plants, animals, and some bacteria. They have a six-transmembrane/one-pore structure, a cytosolic cyclic nucleotide-binding domain, and a cytosolic calmodulin-binding domain. Despite their functional similarities, the plant CNGC family members appear to have different conserved amino acid motifs within corresponding functional domains than animal and bacterial CNGCs do. Here we describe the development and application of methods employing plant CNGC-specific sequence motifs as diagnostic tools to identify novel candidate channels in different plants. These methods are used to evaluate the validity of annotations of putative orthologs of CNGCs from plant genomes. The methods detail how to employ regular expressions of conserved amino acids in functional domains of annotated CNGCs and together with Web tools such as PHI-BLAST and ScanProsite to identify novel candidate CNGCs in species including Physcomitrella patens.
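
    A short Python sketch of the motif-scanning idea, using a purely hypothetical motif pattern rather than the published CNGC consensus motifs; the regular-expression style mirrors what tools such as ScanProsite accept.

        import re

        # Hypothetical example motif, NOT the published CNGC consensus:
        # leucine, any two residues, glycine, then an aromatic residue (F/W/Y)
        MOTIF = re.compile(r"L..G[FWY]")

        def scan(protein_seq):
            # Report every non-overlapping match position of the motif
            return [(m.start(), m.group()) for m in MOTIF.finditer(protein_seq)]

        seq = "MSTLQKGFAAPLVRGWLDAGYLLWGY"   # toy protein sequence
        print(scan(seq))                     # e.g. [(3, 'LQKGF'), ...]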

  15. Compressive sensing via nonlocal low-rank regularization.

    PubMed

    Dong, Weisheng; Shi, Guangming; Li, Xin; Ma, Yi; Huang, Feng

    2014-08-01

    Sparsity has been widely exploited for exact reconstruction of a signal from a small number of random measurements. Recent advances have suggested that structured or group sparsity often leads to more powerful signal reconstruction techniques in various compressed sensing (CS) studies. In this paper, we propose a nonlocal low-rank regularization (NLR) approach toward exploiting structured sparsity and explore its application to CS of both photographic and MRI images. We also propose the use of the nonconvex log det(X) as a smooth surrogate function for the rank instead of the convex nuclear norm, and justify the benefit of such a strategy using extensive experiments. To further improve the computational efficiency of the proposed algorithm, we have developed a fast implementation using the alternating direction method of multipliers (ADMM). Experimental results have shown that the proposed NLR-CS algorithm can significantly outperform existing state-of-the-art CS techniques for image recovery.
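
    A small Python comparison of the two rank surrogates mentioned above, applied to the singular-value shrinkage step such algorithms perform on patch matrices; under the log det surrogate, one reweighted step shrinks each singular value by tau/(sigma + eps), so large singular values are penalized less than under the nuclear norm (the toy low-rank matrix is an assumption for demonstration).

        import numpy as np

        def svt(M, tau):
            # Nuclear-norm proximal step: uniform soft-thresholding of singular values
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

        def logdet_shrink(M, tau, eps=1e-2):
            # One reweighted step for the log det surrogate: threshold tau/(sigma + eps)
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s_new = np.maximum(s - tau / (s + eps), 0)
            return U @ np.diag(s_new) @ Vt

        rng = np.random.default_rng(5)
        low_rank = rng.random((20, 3)) @ rng.random((3, 20))   # rank-3 patch matrix
        noisy = low_rank + rng.normal(0, 0.1, (20, 20))
        for f in (svt, logdet_shrink):
            err = np.linalg.norm(f(noisy, 0.5) - low_rank) / np.linalg.norm(low_rank)
            print(f.__name__, round(err, 4))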

  16. Regular paths in SparQL: querying the NCI Thesaurus.

    PubMed

    Detwiler, Landon T; Suciu, Dan; Brinkley, James F

    2008-01-01

    OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, this has come at a cognitive cost: OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
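
    GLEEN itself is a library extension, so as a rough stand-alone illustration of regular path expressions over RDF graphs, the sketch below uses SPARQL 1.1 property paths (a later standardization of similar path syntax) executed with Python's rdflib; the tiny ontology is invented for the example and has nothing to do with the NCI Thesaurus.

        from rdflib import Graph

        ttl = """
        @prefix ex: <http://example.org/> .
        ex:gene1    ex:partOf ex:pathwayA .
        ex:pathwayA ex:partOf ex:processB .
        ex:processB ex:partOf ex:systemC .
        """
        g = Graph()
        g.parse(data=ttl, format="turtle")

        # One-or-more repetition of ex:partOf, a regular path over graph edges
        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?ancestor WHERE { ex:gene1 ex:partOf+ ?ancestor }
        """
        for row in g.query(q):
            print(row.ancestor)   # pathwayA, processB, systemC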

  17. Multimodal manifold-regularized transfer learning for MCI conversion prediction.

    PubMed

    Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang

    2015-12-01

    As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has a high chance of converting to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and also for evaluating AD risk pre-symptomatically. Unlike most previous methods that used only the samples from a target domain to train a classifier, in this paper, we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of the MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second is a semi-supervised multimodal manifold-regularized least squares classification method, where the target-domain samples, the auxiliary-domain samples, and the unlabeled samples can be jointly used for training our classifier. Furthermore, with the integration of a group sparsity constraint into our objective function, the proposed M2TL has the capability of selecting informative samples to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method, which achieves a classification accuracy of 80.1% for MCI conversion prediction, outperforming state-of-the-art methods.

  18. Inference for survival prediction under the regularized Cox model.

    PubMed

    Sinnott, Jennifer A; Cai, Tianxi

    2016-10-01

    When a moderate number of potential predictors are available and a survival model is fit with regularization to achieve variable selection, providing accurate inference on the predicted survival can be challenging. We investigate inference on the predicted survival estimated after fitting a Cox model under regularization guaranteeing the oracle property. We demonstrate that existing asymptotic formulas for the standard errors of the coefficients tend to underestimate the variability for some coefficients, while typical resampling such as the bootstrap tends to overestimate it; these approaches can both lead to inaccurate variance estimation for predicted survival functions. We propose a two-stage adaptation of a resampling approach that brings the estimated error in line with the truth. In stage 1, we estimate the coefficients in the observed data set and in [Formula: see text] resampled data sets, and allow the resampled coefficient estimates to vote on whether each coefficient should be 0. For those coefficients voted as zero, we set both the point and interval estimates to [Formula: see text]. In stage 2, to make inference about coefficients not voted as zero in stage 1, we refit the penalized model in the observed data and in the [Formula: see text] resampled data sets with only the variables corresponding to those coefficients. We demonstrate that the ensemble voting-based point and interval estimators of the coefficients perform well in finite samples, and prove that the point estimator maintains the oracle property. We extend this approach to derive inference procedures for survival functions and demonstrate that our proposed interval estimation procedures substantially outperform estimators based on asymptotic inference or the standard bootstrap. We further illustrate our proposed procedures by predicting breast cancer survival in a gene expression study. PMID:27107008
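
    A schematic Python sketch of the two-stage resample-and-vote idea, with a lasso-penalized linear model standing in for the paper's regularized Cox model and an ordinary least-squares refit standing in for stage 2's penalized refit; this mirrors only the voting logic, not the authors' estimator or its oracle-property guarantees.

        import numpy as np
        from sklearn.linear_model import Lasso, LinearRegression

        rng = np.random.default_rng(6)
        n, p = 200, 10
        X = rng.normal(size=(n, p))
        beta = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])
        y = X @ beta + rng.normal(size=n)

        B = 100
        votes = np.zeros(p)
        for _ in range(B):                        # stage 1: resample and vote
            idx = rng.integers(0, n, n)           # bootstrap resample
            fit = Lasso(alpha=0.1).fit(X[idx], y[idx])
            votes += (fit.coef_ != 0)
        keep = votes / B > 0.5                    # majority vote on nonzero status

        coef = np.zeros(p)                        # stage 2: refit on voted-in variables
        if keep.any():
            coef[keep] = LinearRegression().fit(X[:, keep], y).coef_
        print(np.round(coef, 2))                  # voted-out coefficients fixed at zero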

  2. PDE regularization for Bayesian reconstruction of emission tomography

    NASA Astrophysics Data System (ADS)

    Wang, Zhentian; Zhang, Li; Xing, Yuxiang; Zhao, Ziran

    2008-03-01

    The aim of the present study is to investigate a type of Bayesian reconstruction which utilizes partial differential equation (PDE) image models as regularization. PDE image models are widely used in image restoration and segmentation. In a PDE model, the image can be viewed as the solution of an evolutionary differential equation, and the variation of the image can be regarded as the descent of an energy function, which allows us to use PDE models in Bayesian reconstruction. In this paper, two PDE models known as anisotropic diffusion are studied. Both have the edge-preserving and denoising characteristics of the popular median root prior (MRP). We use PDE regularization with an ordered-subsets accelerated Bayesian one-step-late (OSL) reconstruction algorithm for emission tomography; the OS-accelerated OSL algorithm is more practical than a non-accelerated one. The proposed algorithm is called OSEM-PDE. We validated OSEM-PDE using a Zubal phantom in numerical experiments that account for attenuation correction and quantum noise, and the results are compared with OSEM and an OS version of MRP (OSEM-MRP) reconstruction. OSEM-PDE shows better results in both bias and variance, and the reconstructed images are smoother and have sharper edges, making them more suitable for post-processing such as segmentation; we validate this using a k-means segmentation algorithm. The classic OSEM is not convergent, especially in noisy conditions; in our experiments, however, OSEM-PDE benefits from OS acceleration and remains stable and convergent, whereas OSEM-MRP fails to converge.

  3. Mathematical strategies for filtering complex systems: Regularly spaced sparse observations

    SciTech Connect

    Harlim, J.; Majda, A.J.

    2008-05-01

    Real-time filtering of noisy turbulent signals through sparse observations on a regularly spaced mesh is a notoriously difficult and important prototype filtering problem. Simpler off-line test criteria are proposed here as guidelines for filter performance for these stiff multi-scale filtering problems in the context of linear stochastic partial differential equations with turbulent solutions. Filtering turbulent solutions of the stochastically forced dissipative advection equation through sparse observations is developed as a stringent test bed for filter performance with sparse regular observations. The standard ensemble transform Kalman filter (ETKF) has poor skill on the test bed and, surprisingly, even suffers from filter divergence at observable times with resonant mean forcing and a decaying energy spectrum in the partially observed signal. Systematic alternative filtering strategies are developed here, including the Fourier Domain Kalman Filter (FDKF) and various reduced filters called the Strongly Damped Approximate Filter (SDAF), Variance Strongly Damped Approximate Filter (VSDAF), and Reduced Fourier Domain Kalman Filter (RFDKF), which operate only on the primary Fourier modes associated with the sparse observation mesh while nevertheless incorporating into the approximate filter various features of the interaction with the remaining modes. It is shown that these much cheaper alternative filters have significant skill on the test bed of turbulent solutions, exceeding that of the ETKF and in various regimes often exceeding that of the FDKF, provided that the approximate filters are guided by the off-line test criteria. The skill of the various approximate filters depends on the energy spectrum of the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale, in a precise fashion elucidated here.

  4. Preference for luminance histogram regularities in natural scenes.

    PubMed

    Graham, Daniel; Schwarz, Bianca; Chatterjee, Anjan; Leder, Helmut

    2016-03-01

    Natural scene luminance distributions typically have positive skew, and for single objects, there is evidence that higher skew is a correlate (but not a guarantee) of glossiness. Skewness is also relevant to aesthetics: preference for glossy single objects (with high skew) has been shown even in infants, and skewness is a good predictor of fruit freshness. Given that primate vision appears to efficiently encode natural scene luminance variation, and given evidence that natural scene regularities may be a prerequisite for aesthetic perception in the spatial domain, here we ask whether humans in general prefer natural scenes with more positively skewed luminance distributions. If humans generally prefer images with the higher-order regularities typical of natural scenes and/or shiny objects, we would expect this to be the case. By manipulating luminance distribution skewness (holding mean and variance constant) for individual natural images, we show that in fact preference varies inversely with increasing positive skewness. This finding holds for: artistic landscape images and calibrated natural scenes; scenes with and without glossy surfaces; landscape scenes and close-up objects; and noise images with natural luminance histograms. Across conditions, humans prefer images with skew near zero over higher skew images, and they prefer skew lower than that of the unmodified scenes. These results suggest that humans prefer images with luminances that are distributed relatively evenly about the mean luminance, i.e., images with similar amounts of light and dark. We propose that our results reflect an efficient processing advantage of low-skew images over high-skew images, following evidence from prior brain imaging results.
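
    A quick numpy illustration of the statistic manipulated above: skewness of an image's luminance distribution, computed as the third standardized moment after removing the mean and variance, the quantities held constant in the stimuli (the synthetic "images" below are assumptions; natural scenes typically land on the positive side).

        import numpy as np

        def luminance_skew(img):
            # Third standardized moment of the pixel luminance distribution
            x = img.ravel().astype(float)
            z = (x - x.mean()) / x.std()
            return np.mean(z ** 3)

        rng = np.random.default_rng(7)
        natural_like = rng.lognormal(0.0, 0.5, size=(256, 256))  # positively skewed
        symmetric = rng.normal(size=(256, 256))                  # skew near zero
        print(round(luminance_skew(natural_like), 3),
              round(luminance_skew(symmetric), 3))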

  5. Regular Football Practice Improves Autonomic Cardiac Function in Male Children

    PubMed Central

    Fernandes, Luis; Oliveira, Jose; Soares-Miranda, Luisa; Rebelo, Antonio; Brito, Joao

    2015-01-01

    Background: The role of the autonomic nervous system (ANS) in cardiovascular regulation is of primal importance, since ANS dysfunction has been associated with adverse conditions such as cardiac arrhythmias, sudden death, sleep disorders, hypertension and obesity. Objectives: The present study aimed to investigate the impact of recreational football practice on the autonomic cardiac function of male children, as measured by heart rate variability. Patients and Methods: Forty-seven male children aged 9 - 12 years were selected according to their engagement with football-oriented practice outside the school context. The children were divided into a football group (FG; n = 22) and a control group (CG; n = 25). The FG had regular football practice, with 2 weekly training sessions and occasional weekend matches. The CG was not engaged in any physical activity other than complementary school-based physical education classes. Data on physical activity, physical fitness, and heart rate variability measured in the time and frequency domains were obtained. Results: The anthropometric and body composition characteristics were similar in both groups (P > 0.05). The groups were also similar in time spent daily on moderate-to-vigorous physical activities (FG vs. CG: 114 ± 64 vs. 87 ± 55 minutes; P > 0.05). However, the FG performed better (P < 0.05) in the Yo-Yo intermittent endurance test (1394 ± 558 vs. 778 ± 408 m) and the 15-m sprint test (3.06 ± 0.17 vs. 3.20 ± 0.23 s). The FG also presented enhanced autonomic function: significant differences were detected (P < 0.05) between groups for low-frequency normalized units (38.0 ± 15.2 vs. 47.3 ± 14.2 n.u.), high-frequency normalized units (62.1 ± 15.2 vs. 52.8 ± 14.2 n.u.), and the LF:HF ratio (0.7 ± 0.4 vs. 1.1 ± 0.6). Conclusions: Children engaged in regular football practice presented enhanced physical fitness and autonomic function, with increased vagal tone at rest. PMID:26448848

  6. Regular Rehearsal Helps in Consolidation of Long Term Memory

    PubMed Central

    Parle, Milind; Singh, Nirmal; Vasudevan, Mani

    2006-01-01

    Memory, one of the most complex functions of the brain, comprises multiple components such as perception, registration, consolidation, storage, retrieval and decay. The present study was undertaken to evaluate the impact of different training sessions on the retention capacity of rats. The capacity for retention of a learnt task was measured using exteroceptive behavioral models such as the hexagonal swimming pool apparatus, Hebb-Williams maze and elevated plus-maze. A total of 150 rats divided into fifteen groups were employed in the present study. The animals were subjected to different training sessions during the first three days. The ability to retain the learned task was tested after single, sub-acute, acute, sub-chronic and chronic exposure to the above exteroceptive memory models in separate groups of animals. The memory score of all animals was recorded 72 h, 192 h and 432 h after their last training trial. Rats of the single-exposure group did not show any effect on memory. Sub-acute training group animals showed improved memory up to 72 h only, whereas in the acute and sub-chronic training groups this memory improvement extended up to 192 h. The rats subjected to chronic exposures showed a significant improvement in retention capacity that lasted up to eighteen days. These observations suggest that repeated rehearsals at regular intervals are probably necessary for consolidation of long-term memory. It was observed that sub-acute, acute and sub-chronic exposures improved the retrieval ability of rats, but this memory-improving effect was short-lived. Thus, rehearsal or training plays a crucial role in enhancing one's capacity for retaining learnt information. Key points: the present study underlines the importance of regular rehearsals in enhancing one's capacity for retaining learnt information; sub-acute, acute and sub-chronic rehearsals result in storage of information for a limited period of time.

  7. Zeroth-order regular approximation approach to molecular parity violation

    SciTech Connect

    Berger, Robert; Langermann, Norbert; Wuellen, Christoph van

    2005-04-01

    We present an ab initio (quasirelativistic) two-component approach to the computation of molecular parity-violating effects which is based on the zeroth-order regular approximation (ZORA). As a first application, we compute the parity-violating energy differences between various P and M conformations of C₂-symmetric molecules belonging to the series H₂X₂ with X = O, S, Se, Te, Po. The results are compared to previously reported (relativistic) four-component Dirac-Hartree-Fock-Coulomb (DHFC) data. Relative deviations between ZORA and DHFC values are well below 2% for diselane and the heavier homologs, whereas somewhat larger relative deviations are observed for the lighter homologs. The larger deviations for lighter systems are attributed to the (nonlocal) exchange terms coupling large and small components, which have been neglected in the present ZORA implementation. For heavier systems these play a minor role, which explains the good performance of the ZORA approach. An excellent performance, even for lighter systems, is expected for a related density-functional-theory-based ZORA because then the exchange terms coupling large and small components are absent.

  8. Tracked regularized ultrasound elastography for targeting breast radiotherapy.

    PubMed

    Rivaz, Hassan; Foroughi, Pezhman; Fleming, Ioana; Zellars, Richard; Boctor, Emad; Hager, Gregory

    2009-01-01

    Tracked ultrasound elastography can be used for guidance in partial breast radiotherapy by visualizing the hard scar tissue around the lumpectomy cavity. For clinical success, the elastography method needs to be robust to the sources of decorrelation between ultrasound images, specifically fluid motions inside the cavity, change of the appearance of speckles caused by compression or physiologic motions, and out-of-plane motion of the probe. In this paper, we present a novel elastography technique that is based on analytic minimization of a regularized cost function. The cost function incorporates similarity of RF data intensity and displacement continuity, making the method robust to small decorrelations present throughout the image. We also exploit techniques from robust statistics to make the method resistant to large decorrelations caused by sources such as fluid motion. The analytic displacement estimation works in real-time. Moreover, the tracked data, used for targeting the radiotherapy, is exploited for discarding frames with excessive out-of-plane motion. Simulation, phantom and patient results are presented.
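
    A simplified 1D Python sketch of minimizing a regularized cost that combines data similarity with displacement continuity; the data term is linearized (a one-step, optical-flow-style approximation), so the minimizer solves a single linear system. This loosely follows the analytic-minimization strategy described above but is not the authors' 2D RF-data implementation.

        import numpy as np

        def regularized_displacement(pre, post, alpha=10.0):
            # Minimize sum_i (g_i d_i - (pre_i - post_i))^2 + alpha * sum_i (d_i - d_{i-1})^2
            # with g the per-sample gradient of post; the minimizer solves A d = b.
            g = np.gradient(post)
            n = len(pre)
            D = np.diff(np.eye(n), axis=0)       # first-difference operator
            A = np.diag(g ** 2) + alpha * (D.T @ D)
            b = g * (pre - post)
            return np.linalg.solve(A, b)

        x = np.linspace(0, 8 * np.pi, 400)
        pre = np.sin(x)                           # pre-compression line (toy signal)
        shift = 1.5                               # true displacement, in samples
        post = np.sin(x - shift * (x[1] - x[0]))  # shifted post-compression line
        d = regularized_displacement(pre, post)
        print(round(float(np.median(d)), 2))      # recovered shift, ~1.5 samples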

  9. Can static regular black holes form from gravitational collapse?

    NASA Astrophysics Data System (ADS)

    Zhang, Yiyang; Zhu, Yiwei; Modesto, Leonardo; Bambi, Cosimo

    2015-02-01

    Starting from the Oppenheimer-Snyder model, we know how in classical general relativity the gravitational collapse of matter forms a black hole with a central spacetime singularity. It is widely believed that the singularity must be removed by quantum-gravity effects. Some static quantum-inspired singularity-free black hole solutions have been proposed in the literature, but when one considers simple examples of gravitational collapse the classical singularity is replaced by a bounce, after which the collapsing matter expands for ever. We may expect three possible explanations: (i) the static regular black hole solutions are not physical, in the sense that they cannot be realized in Nature, (ii) the final product of the collapse is not unique, but it depends on the initial conditions, or (iii) boundary effects play an important role and our simple models miss important physics. In the latter case, after proper adjustment, the bouncing solution would approach the static one. We argue that the "correct answer" may be related to the appearance of a ghost state in de Sitter spacetimes with super Planckian mass. Our black holes have indeed a de Sitter core and the ghost would make these configurations unstable. Therefore we believe that these black hole static solutions represent the transient phase of a gravitational collapse but never survive as asymptotic states.

  10. Deconvolution based photoacoustic reconstruction for directional transducer with sparsity regularization

    NASA Astrophysics Data System (ADS)

    Moradi, Hamid; Tang, Shuo; Salcudean, Septimiu E.

    2016-03-01

    We define a deconvolution-based photoacoustic reconstruction with sparsity regularization (DPARS) algorithm for image restoration from projections. The proposed method is capable of visualizing tissue in the presence of constraints such as the specific directivity of sensors and limited-view photoacoustic tomography (PAT). The directivity effect means that our algorithm treats the optically generated ultrasonic waves according to the direction from which they arrive at the transducer. Most PA image reconstruction methods assume that sensors have an omni-directional response; in practice, however, sensors show higher sensitivity to ultrasonic waves coming from one specific direction. In DPARS, the sensitivity of the transducer to incoming waves from different directions is taken into account. The DPARS algorithm thus accounts for the relative location of the absorbers with respect to the transducers and generates a linear system of equations to solve for the distribution of absorbers. The numerical conditioning and computing times are improved by the use of a sparse Discrete Cosine Transform (DCT) representation of the distribution of absorption coefficients. Our simulation results show that DPARS outperforms the conventional Delay-and-Sum (DAS) reconstruction method in terms of CNR and RMS errors. Experimental results confirm that DPARS provides images with higher resolution than DAS.

  11. The effect of regular exercise on cognitive functioning and personality.

    PubMed Central

    Young, R. J.

    1979-01-01

    The effect of regular exercise on cognitive functioning and personality was investigated in 32 subjects representing 4 discrete groups based on sex and age. Before and after a 10 week exercise programme of jogging, calisthenics, and recreational activities, a test battery was administered to assess functioning in a number of domains: intelligence (WAIS Digit Symbol and Block Design); brain function (Trail-Making); speed of performance (Crossing-Off); memory and learning (WMS Visual Reproduction and Associate Learning); morale and life satisfaction (Life Satisfaction and Control Ratings); anxiety (MAACL); and depression (MAACL). Improvement was observed on several physiological parameters. ANOVA revealed significant sex and age differences on Digit Symbol and Block Design and age differences on Trail-Making, Crossing-Off, Associate Learning, and anxiety. Regardless of sex and age, significant improvement in performance was observed from pre- to post-test on Digit Symbol, Block Design, Trail-Making, Crossing-Off, and Associate Learning. In addition, an increase in health status rating (p less than .01) and a decrease in anxiety were observed from pre- to post-test. These data illustrate beneficial effects of exercise on certain measures of cognitive functioning and personality. PMID:486882

  12. Satisfiability Threshold for Random Regular nae-sat

    NASA Astrophysics Data System (ADS)

    Ding, Jian; Sly, Allan; Sun, Nike

    2016-01-01

    We consider the random regular k-NAE-SAT problem with n variables, each appearing in exactly d clauses. For all k exceeding an absolute constant k₀, we establish explicitly the satisfiability threshold d* ≡ d*(k). We prove that for d < d* the problem is satisfiable with high probability, while for d > d* the problem is unsatisfiable with high probability. If the threshold d* lands exactly on an integer, we show that the problem is satisfiable with probability bounded away from both zero and one. This is the first result to locate the exact satisfiability threshold in a random constraint satisfaction problem exhibiting the condensation phenomenon identified by Krzakała et al. [Proc Natl Acad Sci 104(25):10318-10323, 2007]. Our proof verifies the one-step replica symmetry breaking formalism for this model. We expect our methods to be applicable to a broad range of random constraint satisfaction problems and combinatorial problems on random graphs.
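
    A tiny Python helper that makes the problem definition concrete: it checks whether an assignment NAE-satisfies a formula, i.e., whether every clause contains both a true and a false literal (the signed-integer clause encoding is an assumed convention borrowed from SAT solvers).

        def nae_satisfied(clauses, assignment):
            # clauses: tuples of nonzero ints; literal v refers to variable |v|,
            # negated if v < 0. assignment: dict {var: bool}.
            for clause in clauses:
                vals = [assignment[abs(v)] ^ (v < 0) for v in clause]
                if all(vals) or not any(vals):    # an all-equal clause violates NAE
                    return False
            return True

        # 3-NAE-SAT instance: each clause needs literals of both truth values
        clauses = [(1, 2, -3), (-1, 2, 3), (1, -2, 3)]
        print(nae_satisfied(clauses, {1: True, 2: True, 3: True}))    # True
        print(nae_satisfied(clauses, {1: True, 2: False, 3: True}))   # False: (1,-2,3) all-true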

  13. How color, regularity, and good Gestalt determine backward masking.

    PubMed

    Sayim, Bilge; Manassi, Mauro; Herzog, Michael

    2014-06-18

    The strength of visual backward masking depends on the stimulus onset asynchrony (SOA) between target and mask. Recently, it was shown that the conjoint spatial layout of target and mask is as crucial as SOA. Particularly, masking strength depends on whether target and mask group with each other. The same is true in crowding where the global spatial layout of the flankers and target-flanker grouping determine crowding strength. Here, we presented a vernier target followed by different flanker configurations at varying SOAs. Similar to crowding, masking of a red vernier target was strongly reduced for arrays of 10 green compared with 10 red flanking lines. Unlike crowding, single green lines flanking the red vernier showed strong masking. Irregularly arranged flanking lines yielded stronger masking than did regularly arranged lines, again similar to crowding. While cuboid flankers reduced crowding compared with single lines, this was not the case in masking. We propose that, first, masking is reduced when the flankers are part of a larger spatial structure. Second, spatial factors counteract color differences between the target and the flankers. Third, complex Gestalts, such as cuboids, seem to need longer processing times to show ungrouping effects as observed in crowding. Strong parallels between masking and crowding suggest similar underlying mechanisms; however, temporal factors in masking additionally modulate performance, acting as an additional grouping cue.

  14. Nonergodic Phases in Strongly Disordered Random Regular Graphs

    NASA Astrophysics Data System (ADS)

    Altshuler, B. L.; Cuevas, E.; Ioffe, L. B.; Kravtsov, V. E.

    2016-10-01

    We combine numerical diagonalization with semianalytical calculations to prove the existence of the intermediate nonergodic but delocalized phase in the Anderson model on disordered hierarchical lattices. We suggest a new generalized population dynamics that is able to detect the violation of ergodicity of the delocalized states within the Abou-Chacra, Anderson, and Thouless recursive scheme. This result is supplemented by statistics of random wave functions extracted from exact diagonalization of the Anderson model on an ensemble of disordered random regular graphs (RRG) of N sites with connectivity K = 2. By extrapolation of the results of both approaches to N → ∞ we obtain the fractal dimensions D1(W) and D2(W), as well as the population dynamics exponent D(W), with accuracy sufficient to claim that they are nontrivial over a broad interval of disorder strength. A singularity in the D1,2(W) dependencies provides clear evidence for a first-order transition between the two delocalized phases on the RRG at W_E ≈ 10.0. We discuss the implications of these results for quantum and classical nonintegrable and many-body systems.
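
    A rough numerical flavor of the fractal-dimension diagnostics can be obtained with a few lines of Python: the sketch below builds a degree-3 random regular graph (branching number K = 2), diagonalizes an Anderson Hamiltonian, and estimates D2 from the inverse participation ratio of midspectrum states. The sizes, disorder value, and single-size shortcut are illustrative assumptions, not the paper's procedure (which extrapolates to N → ∞); networkx is assumed to be available.

        # Estimate D2 from IPR ~ N^{-D2} for midspectrum eigenstates of an
        # Anderson model on a random regular graph. Illustrative sketch only.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        N, W = 512, 10.0
        G = nx.random_regular_graph(3, N, seed=2)       # degree 3 = K + 1
        H = nx.to_numpy_array(G) + np.diag(rng.uniform(-W / 2, W / 2, N))

        vals, vecs = np.linalg.eigh(H)
        mid = np.argsort(np.abs(vals))[:50]             # states near E = 0
        ipr = np.sum(vecs[:, mid] ** 4, axis=0)         # inverse participation

        D2 = -np.log(ipr.mean()) / np.log(N)
        print(f"D2 estimate at W = {W}: {D2:.2f} (1 = ergodic, 0 = localized)")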

  15. Efficient surface reconstruction from noisy data using regularized membrane potentials.

    PubMed

    Jalba, Andrei C; Roerdink, Jos B T M

    2009-05-01

    A physically motivated method for surface reconstruction is proposed that can recover smooth surfaces from noisy and sparse data sets. No orientation information is required. By a new technique based on regularized-membrane potentials the input sample points are aggregated, leading to improved noise tolerability and outlier removal, without sacrificing much with respect to detail (feature) recovery. After aggregating the sample points on a volumetric grid, a novel, iterative algorithm is used to classify grid points as exterior or interior to the surface. This algorithm relies on intrinsic properties of the smooth scalar field on the grid which emerges after the aggregation step. Second, a mesh-smoothing paradigm based on a mass-spring system is introduced. By enhancing this system with a bending-energy minimizing term we ensure that the final triangulated surface is smoother than piecewise linear. In terms of speed and flexibility, the method compares favorably with respect to previous approaches. Most parts of the method are implemented on modern graphics processing units (GPUs). Results in a wide variety of settings are presented, ranging from surface reconstruction on noise-free point clouds to grayscale image segmentation.

  16. Wavelet variance analysis for random fields on a regular lattice.

    PubMed

    Mondal, Debashis; Percival, Donald B

    2012-02-01

    There has been considerable recent interest in using wavelets to analyze time series and images that can be regarded as realizations of certain 1-D and 2-D stochastic processes on a regular lattice. Wavelets give rise to the concept of the wavelet variance (or wavelet power spectrum), which decomposes the variance of a stochastic process on a scale-by-scale basis. The wavelet variance has been applied to a variety of time series, and a statistical theory for estimators of this variance has been developed. While there have been applications of the wavelet variance in the 2-D context (in particular, in works by Unser in 1995 on wavelet-based texture analysis for images and by Lark and Webster in 2004 on analysis of soil properties), a formal statistical theory for such analysis has been lacking. In this paper, we develop the statistical theory by generalizing and extending some of the approaches developed for time series, thus leading to a large-sample theory for estimators of 2-D wavelet variances. We apply our theory to simulated data from Gaussian random fields with exponential covariances and from fractional Brownian surfaces. We demonstrate that the wavelet variance is potentially useful for texture discrimination. We also use our methodology to analyze images of four types of clouds observed over the southeast Pacific Ocean.
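
    For readers who want the estimator itself rather than its sampling theory, a level-1 2-D Haar wavelet variance takes only a few lines; the sketch below computes the sample variance of the three detail sub-bands of a lattice field. It is a minimal plain-numpy illustration, not the estimators analyzed in the paper.

        # Level-1 2-D Haar wavelet variance of a random field on a lattice:
        # split into 2x2 blocks, form the three orthonormal detail
        # coefficients, then report each sub-band's sample variance.
        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.standard_normal((256, 256))   # stand-in for an image/field

        a = X[0::2, 0::2]; b = X[0::2, 1::2]
        c = X[1::2, 0::2]; d = X[1::2, 1::2]
        horiz = (a + b - c - d) / 2           # horizontal detail
        vert  = (a - b + c - d) / 2           # vertical detail
        diag  = (a - b - c + d) / 2           # diagonal detail

        for name, w in [("horizontal", horiz), ("vertical", vert),
                        ("diagonal", diag)]:
            print(f"scale-1 {name} wavelet variance: {w.var():.3f}")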

  17. Nonlocal regularization of inverse problems: a unified variational framework

    PubMed Central

    Yang, Zhili; Jacob, Mathews

    2014-01-01

    We introduce a unifying energy minimization framework for nonlocal regularization of inverse problems. In contrast to the weighted sum of square differences between image pixels used by current schemes, the proposed functional is an unweighted sum of inter-patch distances. We use robust distance metrics that promote the averaging of similar patches, while discouraging the averaging of dissimilar patches. We show that the first iteration of a majorize-minimize algorithm to minimize the proposed cost function is similar to current non-local methods. The reformulation thus provides a theoretical justification for the heuristic approach of iterating non-local schemes, which re-estimate the weights from the current image estimate. Thanks to the reformulation, we now understand that the widely reported alias amplification associated with iterative non-local methods is caused by convergence to a local minimum of the nonconvex penalty. We introduce an efficient continuation strategy to overcome this problem. The similarity of the proposed criterion to widely used non-quadratic penalties (e.g., total variation and ℓp semi-norms) opens the door to the adaptation of fast algorithms developed in the context of compressive sensing; we introduce several novel algorithms to solve the proposed non-local optimization problem. Thanks to the unifying framework, these fast algorithms are readily applicable for a large class of distance metrics. PMID:23014745
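
    The key reformulation (one majorize-minimize step of the unweighted inter-patch penalty looks like a non-local-means weighted average) can be seen in a toy denoising example. The sketch below uses a Gaussian robust metric on 1-D patches; the patch size, bandwidth, and signal are invented for illustration, and none of the paper's fast algorithms are reproduced.

        # One MM-style step: weights from robust inter-patch distances,
        # then each sample is replaced by its weighted average.
        import numpy as np

        rng = np.random.default_rng(4)
        clean = np.repeat([0.0, 1.0, 0.0], 60)
        noisy = clean + 0.2 * rng.standard_normal(clean.size)

        P, h = 3, 0.25                          # patch half-width, bandwidth
        pad = np.pad(noisy, P, mode='edge')
        patches = np.stack([pad[i:i + 2 * P + 1] for i in range(noisy.size)])

        d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
        Wgt = np.exp(-d2 / (2 * h ** 2))        # Gaussian robust weights
        Wgt /= Wgt.sum(axis=1, keepdims=True)

        denoised = Wgt @ noisy                  # the MM averaging step
        print('MSE noisy    :', np.mean((noisy - clean) ** 2))
        print('MSE denoised :', np.mean((denoised - clean) ** 2))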

  18. Dynamically self-regular quantum harmonic black holes

    NASA Astrophysics Data System (ADS)

    Spallucci, Euro; Smailagic, Anais

    2015-04-01

    The recently proposed UV self-complete quantum gravity program is a new and very interesting way to envision Planckian/trans-Planckian physics. In this new framework, high energy scattering is dominated by the creation of micro black holes, and it is experimentally impossible to probe distances shorter than the horizon radius. In this letter we present a model which realizes this idea through the creation of self-regular quantum black holes admitting a minimal size extremal configuration. Their radius provides a dynamically generated minimal length acting as a universal short-distance cutoff. We propose a quantization scheme for this new kind of microscopic object based on a Bohr-like approach, which does not require a detailed knowledge of quantum gravity. The resulting black hole quantum picture resembles the energy spectrum of a quantum harmonic oscillator. The mass of the extremal configuration plays the role of zero-point energy. Large quantum numbers re-establish the classical black hole description. Finally, we also formulate a "quantum hoop conjecture" which is satisfied by all the mass eigenstates and sustains the existence of quantum black holes sourced by Gaussian matter distributions.

  19. Droplet impact on regular micro-grooved surfaces

    NASA Astrophysics Data System (ADS)

    Hu, Hai-Bao; Huang, Su-He; Chen, Li-Bin

    2013-08-01

    We have investigated experimentally the process of a droplet impacting on a regular micro-grooved surface. The target surfaces are patterned with micro-scale spokes radiating from the center, concentric circles, and parallel lines on a polished copper plate, fabricated using quasi-LIGA molding technology. The dynamic behavior of water droplets impacting on these structured surfaces is examined using a high-speed camera, including the drop impact processes, the maximum spreading diameters, and the lengths and numbers of fingers at different values of the Weber number. Experimental results show that the spreading processes are arrested on all target surfaces at low impact velocity. The results at higher impact velocity demonstrate that spreading proceeds in the direction parallel to the micro-grooves but is arrested in the direction perpendicular to them. In addition, the lengths of the fingers increase noticeably, to the point that they are ejected as tiny droplets along the groove direction; at the same time, the drop recoil velocity is reduced by micro-grooves parallel to the spreading direction, but not by micro-grooves perpendicular to it.

  20. Dual conformal symmetry at loop level: massive regularization

    NASA Astrophysics Data System (ADS)

    Henn, Johannes M.

    2011-11-01

    Dual conformal symmetry has had a huge impact on our understanding of planar scattering amplitudes in N = 4 super Yang-Mills theory. At tree level, it combines with the original conformal symmetry generators to form a Yangian algebra, a hallmark of integrability, and helps in determining the tree-level amplitudes, which are now known in closed form. At loop level, it determines the functional form of the four- and five-point scattering amplitudes to all orders in the coupling constant and gives restrictions at six points and beyond. The symmetry is best understood at loop level in terms of a novel AdS-inspired infrared regularization which makes the symmetry exact, despite the infrared divergences. This has important consequences for the basis of loop integrals in this theory. Recently, a number of selective reviews have appeared which discuss dual conformal symmetry, mostly at tree level. Here, we give an up-to-date account of dual conformal symmetry, focusing on its status at loop level. Invited review for a special issue of Journal of Physics A: Mathematical and Theoretical devoted to 'Scattering amplitudes in gauge theories'.

  1. Infants with Williams syndrome detect statistical regularities in continuous speech.

    PubMed

    Cashon, Cara H; Ha, Oh-Ryeong; Graf Estes, Katharine; Saffran, Jenny R; Mervis, Carolyn B

    2016-09-01

    Williams syndrome (WS) is a rare genetic disorder associated with delays in language and cognitive development. The reasons for the language delay are unknown. Statistical learning is a domain-general mechanism recruited for early language acquisition. In the present study, we investigated whether infants with WS were able to detect the statistical structure in continuous speech. Eighteen 8- to 20-month-olds with WS were familiarized with 2 min of a continuous stream of synthesized nonsense words; the statistical structure of the speech was the only cue to word boundaries. They were tested on their ability to discriminate statistically defined "words" and "part-words" (which crossed word boundaries) in the artificial language. Despite significant cognitive and language delays, infants with WS were able to detect the statistical regularities in the speech stream. These findings suggest that an inability to track the statistical properties of speech is unlikely to be the primary basis for the delays in the onset of language observed in infants with WS. These results provide the first evidence of statistical learning by infants with developmental delays. PMID:27299804

  2. Regular and chaotic dynamics of a piecewise smooth bouncer

    SciTech Connect

    Langer, Cameron K. Miller, Bruce N.

    2015-07-15

    The dynamical properties of a particle in a gravitational field colliding with a rigid wall moving with piecewise constant velocity are studied. The linear nature of the wall's motion permits further analytical investigation than is possible for the system's sinusoidal counterpart. We consider three distinct approaches to modeling collisions: (i) elastic, (ii) inelastic with constant restitution coefficient, and (iii) inelastic with a velocity-dependent restitution function. We confirm the existence of distinct unbounded orbits (Fermi acceleration) in the elastic model, and investigate regular and chaotic behavior in the inelastic cases. We also examine, in the constant restitution model, trajectories wherein the particle experiences an infinite number of collisions in a finite time, i.e., the phenomenon of inelastic collapse. We address these so-called “sticking solutions” and their relation to both the overall dynamics and the phenomenon of self-reanimating chaos. Additionally, we investigate the long-term behavior of the system as a function of both initial conditions and parameter values. We find the non-smooth nature of the system produces novel bifurcation phenomena not seen in the sinusoidal model, including border-collision bifurcations. The analytical and numerical investigations reveal that although our piecewise linear bouncer is a simplified version of the sinusoidal model, the former not only captures essential features of the latter but also exhibits behavior unique to the discontinuous dynamics.
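
    For intuition, a drastically simplified "static wall" version of such a bouncer fits in a few lines: the flight time between collisions is 2v/g, and each inelastic collision picks up momentum from a piecewise-constant (square-wave) wall velocity. This map is a common textbook approximation, not the paper's event-driven piecewise linear model, and the parameter values are arbitrary.

        # Static-wall bouncer map with constant restitution alpha and a
        # square-wave wall velocity u(t). Illustrative approximation only.
        import numpy as np

        g, U, T, alpha = 9.81, 1.0, 1.0, 0.85
        def wall_velocity(t):
            return U if (t % T) < T / 2 else -U

        t, v = 0.0, 5.0
        speeds = []
        for _ in range(10_000):
            t += 2.0 * v / g                  # ballistic flight back to wall
            v = alpha * v + (1 + alpha) * wall_velocity(t)
            v = abs(v)                        # crude guard for a receding wall
            speeds.append(v)

        print('mean speed:', np.mean(speeds[1000:]))
        print('max  speed:', np.max(speeds[1000:]))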

  3. Infrared radiative transfer through a regular array of cuboidal clouds

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN; Weinman, J. A.

    1981-01-01

    Infrared radiative transfer through a regular array of cuboidal clouds is studied and the interaction of the sides of the clouds with each other and the ground is considered. The theory is developed for black clouds and is extended to scattering clouds using a variable azimuth two-stream approximation. It is shown that geometrical considerations often dominate over the microphysical aspects of radiative transfer through the clouds. For example, the difference in simulated 10 micron brightness temperature between black isothermal cubic clouds and cubic clouds of optical depth 10, is less than 2 deg for zenith angles less than 50 deg for all cloud fractions when viewed parallel to the array. The results show that serious errors are made in flux and cooling rate computations if broken clouds are modeled as planiform. Radiances computed by the usual practice of area-weighting cloudy and clear sky radiances are in error by 2 to 8 K in brightness temperature for cubic clouds over a wide range of cloud fractions and zenith angles. It is also shown that the lapse rate does not markedly affect the exiting radiances for cuboidal clouds of unit aspect ratio and optical depth 10.

  4. [Persuasive communications and regular blood donation: an experimental study].

    PubMed

    Cunha, Balduino Guedes Fernandes da; Dias, Mardonio Rique

    2008-06-01

    This study aimed to: investigate yielding on the dependent variable "behavioral intent to become a regular blood donor"; verify the impact of the persuasive communications on variance in the dependent variable; examine the unique contribution of the external independent variable to the Rational Action Theory; and test the fit of the expanded Rational Choice Theory to the target behavior and sample. A post-test-only, double-blind design was used, with 405 university students randomly assigned to experimental groups 1 and 2, a placebo control group, and a control-only group. The results showed: a lack of yielding by the experimental groups; a considerable percentage of variance in the dependent variable explained by the independent variable in the experimental and placebo control groups; and satisfactory and significant correlations for the variables in the expanded theory. The absence of yielding on the criterion variable was probably due to the time interval. The positive persuasive strategy accounted for the greatest variance in the dependent variable. Moral obligation showed the greatest impact on participants' intent to perform the behavior. The correlations corroborated the theoretical and methodological validity of the expanded theory.

  5. Transition from regular to chaotic motion in black hole magnetospheres

    NASA Astrophysics Data System (ADS)

    Kopáček, Ondřej

    2011-10-01

    Cosmic black holes can act as agents of particle acceleration. We study properties of a system consisting of a rotating black hole immersed in a large-scale organized magnetic field. Electrically charged particles in the immediate neighborhood of the horizon are influenced by strong gravity acting together with magnetic and induced electric components. We relax several constraints which were often imposed in previous works: the magnetic field does not have to share a common symmetry axis with the spin of the black hole but they can be inclined with respect to each other, thus violating the axial symmetry. Also, the black hole does not have to remain at rest but it can instead perform fast translational motion together with rotation. We demonstrate that the generalization brings new effects. Starting from uniform electro-vacuum fields in the curved spacetime, we find separatrices and identify magnetic neutral points forming in certain circumstances. We suggest that these structures can represent signatures of magnetic reconnection triggered by frame-dragging effects in the ergosphere. We further investigate the motion of charged particles in these black hole magnetospheres. We concentrate on the transition from the regular motion to chaos, and in this context we explore the characteristics of chaos in relativity. For the first time, we apply recurrence plots as a suitable technique to quantify the degree of chaoticness near a black hole.
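
    Recurrence plots, the diagnostic mentioned at the end of the abstract, are easy to reproduce on any scalar orbit; the sketch below grades a logistic-map trajectory (a stand-in for the charged-particle orbits of the paper, not the paper's dynamics) by its recurrence rate.

        # Recurrence plot R[i, j] = 1 when two orbit points are closer than
        # eps; the recurrence rate is a standard quantification measure.
        import numpy as np

        x, orbit = 0.4, []
        for _ in range(400):                  # chaotic logistic map, r = 4
            x = 4.0 * x * (1.0 - x)
            orbit.append(x)
        orbit = np.array(orbit)

        eps = 0.05
        R = (np.abs(orbit[:, None] - orbit[None, :]) < eps).astype(int)
        print('recurrence rate:', R.mean())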

  6. Regularized MANOVA (rMANOVA) in untargeted metabolomics.

    PubMed

    Engel, J; Blanchet, L; Bloemen, B; van den Heuvel, L P; Engelke, U H F; Wevers, R A; Buydens, L M C

    2015-10-29

    Many advanced metabolomics experiments currently lead to data where a large number of response variables were measured while one or several factors were changed. Often the number of response variables vastly exceeds the sample size and well-established techniques such as multivariate analysis of variance (MANOVA) cannot be used to analyze the data. ANOVA simultaneous component analysis (ASCA) is an alternative to MANOVA for analysis of metabolomics data from an experimental design. In this paper, we show that ASCA assumes that none of the metabolites are correlated and that they all have the same variance. Because of these assumptions, ASCA may relate the wrong variables to a factor. This reduces the power of the method and hampers interpretation. We propose an improved model that is essentially a weighted average of the ASCA and MANOVA models. The optimal weight is determined in a data-driven fashion. Compared to ASCA, this method assumes that variables can correlate, leading to a more realistic view of the data. Compared to MANOVA, the model is also applicable when the number of samples is (much) smaller than the number of variables. These advantages are demonstrated by means of simulated and real data examples. The source code of the method is available from the first author upon request, and at the following github repository: https://github.com/JasperE/regularized-MANOVA.
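
    The weighting idea can be caricatured as covariance shrinkage: interpolate between a full sample covariance (MANOVA-like) and a scaled identity (ASCA-like, equal variances and no correlations), picking the weight by held-out likelihood. The sketch below is a conceptual stand-in, not the authors' estimator; their actual code is at the github repository cited above.

        # Data-driven weight between a full covariance and an identity
        # target, chosen by Gaussian log-likelihood on held-out samples.
        import numpy as np

        rng = np.random.default_rng(5)
        n, p = 20, 100                        # few samples, many metabolites
        X = rng.standard_normal((n, p)) @ np.diag(np.linspace(0.5, 2.0, p))
        train, test = X[:n // 2], X[n // 2:]

        S = np.cov(train, rowvar=False)       # singular: p >> n
        target = np.eye(p) * np.trace(S) / p  # ASCA-like equal variances

        def loglik(cov, Y):
            sign, logdet = np.linalg.slogdet(cov)
            inv = np.linalg.inv(cov)
            return -0.5 * sum(y @ inv @ y for y in Y) - 0.5 * len(Y) * logdet

        best = max(np.linspace(0.05, 0.95, 19),
                   key=lambda w: loglik(w * target + (1 - w) * S, test))
        print('selected weight toward the identity target:', round(best, 2))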

  7. Evolutionary matching-pennies game on bipartite regular networks

    NASA Astrophysics Data System (ADS)

    Szabó, György; Varga, Levente; Borsos, István

    2014-04-01

    Evolutionary games are studied here with two types of players located on a chessboard or on a bipartite random regular graph. Each player's income comes from matching-pennies games played with the four neighbors. The players can modify their own strategies according to a myopic strategy update resembling the Glauber dynamics for the kinetic Ising model. This dynamical rule drives the system into a stationary state where the two strategies are present with the same probability without correlations between the nearest neighbors while a weak correlation is induced between the second and the third neighbors. In stationary states, the deviation from the detailed balance is quantified by the evaluation of entropy production. Finally, our analysis is extended to evolutionary games where the uniform pair interactions are composed of an anticoordination game and a weak matching-pennies game. This system preserves the Ising type order-disorder transitions at a critical noise level decreasing with the strength of the matching-pennies component for both networks.
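
    A toy Monte Carlo version of this model class is short enough to sketch: two player types on a chessboard torus, each playing matching pennies with its four neighbors and flipping strategies with Glauber-like acceptance at noise level K. The +/-1 payoff convention and all parameter values are assumptions for illustration.

        # Myopic Glauber-style strategy updates for matching pennies on a
        # chessboard: type A wins by matching a neighbor, type B by differing.
        import numpy as np

        rng = np.random.default_rng(6)
        L, K, steps = 32, 0.5, 200_000
        s = rng.integers(0, 2, size=(L, L))          # heads/tails strategies
        ii, jj = np.meshgrid(range(L), range(L), indexing='ij')
        matcher = (ii + jj) % 2 == 0                 # chessboard player types

        def payoff(x, y, sx):
            nbrs = [s[(x - 1) % L, y], s[(x + 1) % L, y],
                    s[x, (y - 1) % L], s[x, (y + 1) % L]]
            same = sum(int(sx == v) for v in nbrs)
            wins = same if matcher[x, y] else 4 - same
            return 2 * wins - 4                      # +1 per win, -1 per loss

        for _ in range(steps):
            x, y = rng.integers(L), rng.integers(L)
            dP = payoff(x, y, 1 - s[x, y]) - payoff(x, y, s[x, y])
            if rng.random() < 1.0 / (1.0 + np.exp(-dP / K)):
                s[x, y] = 1 - s[x, y]

        print('fraction playing heads:', s.mean())   # ~0.5 when stationary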

  8. Regular black holes and noncommutative geometry inspired fuzzy sources

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinpei

    2016-05-01

    We investigated regular black holes with fuzzy sources in three and four dimensions. The density distributions of such fuzzy sources are inspired by noncommutative geometry and given by Gaussian or generalized Gaussian functions. We utilized mass functions to give a physical interpretation of the horizon formation condition for the black holes. In particular, we investigated three-dimensional BTZ-like black holes and four-dimensional Schwarzschild-like black holes in detail, and found that the number of horizons is related to the space-time dimensions, and the existence of a void in the vicinity of the center of the space-time is significant, rather than noncommutativity. As an application, we considered a three-dimensional black hole with the fuzzy disc which is a disc-shaped region known in the context of noncommutative geometry as a source. We also analyzed a four-dimensional black hole with a source whose density distribution is an extension of the fuzzy disc, and investigated the horizon formation condition for it.

  9. Regular synchrony lattices for product coupled cell networks.

    PubMed

    Aguiar, Manuela A D; Dias, Ana Paula S

    2015-01-01

    There are several ways of constructing (bigger) networks from smaller ones. We consider here the Cartesian and the Kronecker (tensor) product networks. Our main aim is to determine a relation between the lattices of synchrony subspaces for a product network and those of the component networks of the product. In this sense, we show how to obtain the lattice of regular synchrony subspaces for a product network from the lattices of synchrony subspaces for the component networks. Specifically, we prove that a tensor of subspaces is a synchrony subspace for the product network if and only if the subspaces involved in the tensor are synchrony subspaces for the component networks of the product. We also show that, in general, there are (irregular) synchrony subspaces for the product network that are not described by the synchrony subspaces for the component networks, so that, in general, it is not possible to obtain the whole synchrony lattice for the product network from the corresponding lattices for the component networks. We also remark that the Cartesian and Kronecker products, as graph operations, are quite different, implying that the associated coupled cell systems have distinct structures. Although the kinds of dynamics expected to occur are difficult to compare, we establish an inclusion relation between the lattices of synchrony subspaces for the Cartesian and Kronecker products. PMID:25637919
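
    The difference between the two products is immediate in adjacency-matrix form: for component graphs with adjacency matrices A1 and A2, the Cartesian product has adjacency A1 (x) I + I (x) A2, while the Kronecker product has A1 (x) A2. A minimal numpy check (triangle times edge, both giving 6-node products):

        # Cartesian vs Kronecker graph products via adjacency matrices.
        import numpy as np

        A1 = np.array([[0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0]])              # triangle
        A2 = np.array([[0, 1],
                       [1, 0]])                 # single edge

        I1, I2 = np.eye(3, dtype=int), np.eye(2, dtype=int)
        cartesian = np.kron(A1, I2) + np.kron(I1, A2)
        kronecker = np.kron(A1, A2)

        print('Cartesian degrees:', cartesian.sum(axis=1))  # all 3 = 2 + 1
        print('Kronecker degrees:', kronecker.sum(axis=1))  # all 2 = 2 * 1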

  10. Grouping by closure influences subjective regularity and implicit preference

    PubMed Central

    Makin, Alexis; Pecchinenda, Anna; Bertamini, Marco

    2012-01-01

    A reflection between a pair of contours is more rapidly detected than a translation, but this effect is stronger when the contours are closed to form a single object compared to when they are closed to form 2 objects with a gap between them. That is, grouping changes the relative salience of different regularities. We tested whether this manipulation would also change preference for reflection or translation. We measured preference for these patterns using the Implicit Association Test (IAT). On some trials, participants saw words that were either positive or negative and had to classify them as quickly as possible. On interleaved trials, they saw reflection or translation patterns and again had to classify them. Participants were faster when 1 button was used for reflection and positive words and another button was used for translation and negative words, compared to when the reverse response mapping was used (translation and positive vs. reflection and negative). This reaction time difference indicates an implicit preference for reflection over translation. However, the size of the implicit preference was significantly reduced in the Two-objects condition. We concluded that factors that affect perceptual sensitivity also systematically affect implicit preference formation. PMID:23145305

  11. Long-term staff scheduling with regular temporal distribution.

    PubMed

    Carrasco, Rafael C

    2010-11-01

    Although optimal staff scheduling often requires elaborate computational methods, cases that are not highly constrained can be solved efficiently using simpler approaches. This paper describes how a simple procedure, combining random and greedy strategies with heuristics, has been successfully applied in a Spanish hospital to assign guard shifts to the physicians in a department. In this case, the employees prefer that their guard duties be distributed regularly in time. The workload distribution must also satisfy some constraints: in particular, the distribution of duties among the staff must be uniform when a number of tasks and shift types (including some infrequent and aperiodic types, such as those scheduled during long weekends) are considered. Furthermore, the composition of teams should be varied, in the sense that no particular pairing should dominate the assignments. The proposed procedure is able to find suitable solutions when the number of employees available for every task is not small compared to the number required at every shift. The software is distributed under the terms of the GNU General Public License.
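
    A highly simplified sketch of the random-plus-greedy idea (not the hospital's actual software) is shown below: for each shift, shuffle the physicians for random tie-breaking, then greedily pick the one with the lightest load and the oldest last duty, which keeps both the workload and the temporal spacing roughly uniform.

        # Random + greedy one-shift-per-day guard scheduling sketch.
        import random

        random.seed(7)
        staff = [f"dr{i}" for i in range(8)]
        last_shift = {p: -10**9 for p in staff}   # "long ago"
        load = {p: 0 for p in staff}

        schedule = []
        for day in range(60):
            candidates = staff[:]
            random.shuffle(candidates)            # random tie-breaking
            pick = min(candidates, key=lambda p: (load[p], last_shift[p]))
            last_shift[pick], load[pick] = day, load[pick] + 1
            schedule.append((day, pick))

        print('duties per physician:', sorted(load.values()))  # near-uniform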

  12. The effect of regular exercise on cognitive functioning and personality.

    PubMed

    Young, R J

    1979-09-01

    The effect of regular exercise on cognitive functioning and personality was investigated in 32 subjects representing 4 discrete groups based on sex and age. Before and after a 10-week exercise programme of jogging, calisthenics, and recreational activities, a test battery was administered to assess functioning in a number of domains: intelligence (WAIS Digit Symbol and Block Design); brain function (Trail-Making); speed of performance (Crossing-Off); memory and learning (WMS Visual Reproduction and Associate Learning); morale and life satisfaction (Life Satisfaction and Control Ratings); anxiety (MAACL); and depression (MAACL). Improvement was observed on several physiological parameters. ANOVA revealed significant sex and age differences on Digit Symbol and Block Design and age differences on Trail-Making, Crossing-Off, Associate Learning, and anxiety. Regardless of sex and age, significant improvement in performance was observed from pre- to post-test on Digit Symbol, Block Design, Trail-Making, Crossing-Off, and Associate Learning. In addition, an increase in health status rating (p < .01) and a decrease in anxiety were observed from pre- to post-test. These data illustrate beneficial effects of exercise on certain measures of cognitive functioning and personality.

  13. Choice of the regularization parameter for perfusion quantification with MRI

    NASA Astrophysics Data System (ADS)

    Sourbron, S.; Luypaert, R.; Van Schuerbeek, P.; Dujardin, M.; Stadnik, T.

    2004-07-01

    Truncated singular value decomposition (TSVD) is an effective method for the deconvolution of dynamic contrast enhanced (DCE) MRI. Two robust methods for the selection of the truncation threshold on a pixel-by-pixel basis—generalized cross validation (GCV) and the L-curve criterion (LCC)—were optimized and compared to paradigms in the literature. GCV and LCC were found to perform optimally when applied with a smooth version of TSVD, known as standard form Tikhonov regularization (SFTR). The methods lead to improvements in the estimate of the residue function and of its maximum, and converge properly with SNR. The oscillations typically observed in the solution vanish entirely, and perfusion is more accurately estimated at small mean transit times. This results in improved image contrast and increased sensitivity to perfusion abnormalities, at the cost of 1-2 min in calculation time and hyperintense clusters in the image. Preliminary experience with clinical data suggests that the latter problem can be resolved using spatial continuity and/or hybrid thresholding methods. In the simulations GCV and LCC are equivalent in terms of performance, but GCV thresholding is faster.
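
    The generic machinery behind this comparison (SVD filter factors plus a GCV search for the regularization parameter) is compact enough to sketch on a toy deconvolution problem; the blur matrix, noise level, and decay curve below are invented stand-ins for DCE-MRI data.

        # Standard-form Tikhonov regularization with GCV-selected lambda.
        import numpy as np

        rng = np.random.default_rng(8)
        n = 80
        t = np.arange(n)
        A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)  # blur
        x_true = np.exp(-t / 15.0)                    # residue-like decay
        y = A @ x_true + 0.01 * rng.standard_normal(n)

        U, sv, Vt = np.linalg.svd(A)
        beta = U.T @ y

        def gcv(lam):
            f = sv**2 / (sv**2 + lam**2)              # Tikhonov filter factors
            resid = np.sum(((1 - f) * beta) ** 2)
            return resid / (n - f.sum()) ** 2

        lams = np.logspace(-6, 1, 200)
        lam = lams[np.argmin([gcv(l) for l in lams])]
        f = sv**2 / (sv**2 + lam**2)
        x_hat = Vt.T @ (f * beta / sv)

        print('chosen lambda :', lam)
        print('relative error:',
              np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))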

  14. Regularized MANOVA (rMANOVA) in untargeted metabolomics.

    PubMed

    Engel, J; Blanchet, L; Bloemen, B; van den Heuvel, L P; Engelke, U H F; Wevers, R A; Buydens, L M C

    2015-10-29

    Many advanced metabolomics experiments currently lead to data where a large number of response variables were measured while one or several factors were changed. Often the number of response variables vastly exceeds the sample size and well-established techniques such as multivariate analysis of variance (MANOVA) cannot be used to analyze the data. ANOVA simultaneous component analysis (ASCA) is an alternative to MANOVA for analysis of metabolomics data from an experimental design. In this paper, we show that ASCA assumes that none of the metabolites are correlated and that they all have the same variance. Because of these assumptions, ASCA may relate the wrong variables to a factor. This reduces the power of the method and hampers interpretation. We propose an improved model that is essentially a weighted average of the ASCA and MANOVA models. The optimal weight is determined in a data-driven fashion. Compared to ASCA, this method assumes that variables can correlate, leading to a more realistic view of the data. Compared to MANOVA, the model is also applicable when the number of samples is (much) smaller than the number of variables. These advantages are demonstrated by means of simulated and real data examples. The source code of the method is available from the first author upon request, and at the following github repository: https://github.com/JasperE/regularized-MANOVA. PMID:26547490

  15. Regular synchrony lattices for product coupled cell networks.

    PubMed

    Aguiar, Manuela A D; Dias, Ana Paula S

    2015-01-01

    There are several ways of constructing (bigger) networks from smaller ones. We consider here the Cartesian and the Kronecker (tensor) product networks. Our main aim is to determine a relation between the lattices of synchrony subspaces for a product network and those of the component networks of the product. In this sense, we show how to obtain the lattice of regular synchrony subspaces for a product network from the lattices of synchrony subspaces for the component networks. Specifically, we prove that a tensor of subspaces is a synchrony subspace for the product network if and only if the subspaces involved in the tensor are synchrony subspaces for the component networks of the product. We also show that, in general, there are (irregular) synchrony subspaces for the product network that are not described by the synchrony subspaces for the component networks, so that, in general, it is not possible to obtain the whole synchrony lattice for the product network from the corresponding lattices for the component networks. We also remark that the Cartesian and Kronecker products, as graph operations, are quite different, implying that the associated coupled cell systems have distinct structures. Although the kinds of dynamics expected to occur are difficult to compare, we establish an inclusion relation between the lattices of synchrony subspaces for the Cartesian and Kronecker products.

  16. A behavioral study of regularity, irregularity and rules in the English past tense.

    PubMed

    Magen, Harriet S

    2014-12-01

    Opposing views of storage and processing of morphologically complex words (e.g., past tense) have been suggested: the dual system, whereby regular forms are not in the lexicon but are generated by rule, while irregular forms are explicitly represented; the single system, whereby regular and irregular forms are computed by a single system, using associative connections; and a system whereby phonological rules relate both regular and irregular past to present tense forms. Two reaction time experiments investigated the production of the past tense in English in response to the auditory presentation of the present tense of the verb. The first experiment addressed the methodology of presenting regulars and irregulars in blocked form as in a previous study (Jaeger et al. in Language 72:451-497, 1996). Blocked presentation results showed longer RTs for the elicitation of irregular pasts than for regular pasts; however, there were no differences between regular and irregular elicitation when the presentation was randomized, indicating that it is rules that are being primed. The second experiment tested whether the response time advantage found for blocked regular verbs in the first experiment might also extend to irregular verb forms exhibiting the same sub-regularity (e.g., sing-sang may prime ring-rang). Results showed a trend towards slower RTs when past tense forms from different sub-regularities follow one another, suggesting interference between one sub-regularity and another.

  17. L1/2 regularization: a thresholding representation theory and a fast solver.

    PubMed

    Xu, Zongben; Chang, Xiangyu; Xu, Fengmin; Zhang, Hai

    2012-07-01

    The special importance of L1/2 regularization has been recognized in recent studies on sparse modeling (particularly on compressed sensing). The L1/2 regularization, however, leads to a nonconvex, nonsmooth, and non-Lipschitz optimization problem that is difficult to solve quickly and efficiently. In this paper, by developing a thresholding representation theory for L1/2 regularization, we propose an iterative half thresholding algorithm for the fast solution of L1/2 regularization, corresponding to the well-known iterative soft thresholding algorithm for L1 regularization and the iterative hard thresholding algorithm for L0 regularization. We prove the existence of the resolvent of the gradient of ||x||_{1/2}^{1/2}, calculate its analytic expression, and establish an alternative feature theorem on solutions of L1/2 regularization, based on which a thresholding representation of solutions of L1/2 regularization is derived and an optimal regularization parameter setting rule is formulated. The developed theory provides a successful extension of the well-known Moreau proximity forward-backward splitting theory to the L1/2 regularization case. We verify the convergence of the iterative half thresholding algorithm and provide a series of experiments to assess the performance of the algorithm. The experiments show that the half thresholding algorithm is effective, efficient, and can be accepted as a fast solver for L1/2 regularization. With the new algorithm, we conduct a phase diagram study to further demonstrate the superiority of L1/2 regularization over L1 regularization.
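
    The overall iteration has the familiar proximal-gradient shape, with the soft-threshold operator replaced by a half-threshold. The sketch below keeps itself self-contained by evaluating that scalar prox on a grid instead of reproducing the paper's closed-form cosine expression; all sizes and parameters are illustrative.

        # Iterative half thresholding (grid-prox stand-in) for
        #   min_x ||A x - y||^2 + lam * ||x||_{1/2}^{1/2}.
        import numpy as np

        def half_threshold(z, lam):
            # argmin over x >= 0 of (x - |z_i|)^2 + lam * sqrt(x), per
            # coordinate, by grid search; the closed form in the paper
            # replaces this with a cosine-based expression.
            out = np.zeros_like(z)
            grid = np.linspace(0.0, np.abs(z).max() + 1.0, 2001)
            for i, zi in enumerate(z):
                obj = (grid - abs(zi)) ** 2 + lam * np.sqrt(grid)
                out[i] = np.sign(zi) * grid[np.argmin(obj)]
            return out

        rng = np.random.default_rng(9)
        m, n, k = 40, 100, 5
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x_true

        x, step, lam = np.zeros(n), 1.0 / np.linalg.norm(A, 2) ** 2, 0.01
        for _ in range(200):
            z = x - step * (A.T @ (A @ x - y))
            x = half_threshold(z, step * lam)

        print('recovered support:', np.flatnonzero(np.round(x, 3)))
        print('relative error   :',
              np.linalg.norm(x - x_true) / np.linalg.norm(x_true))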

  18. Bounds for Maximum Likelihood Regular and Non-Regular DoA Estimation in K-Distributed Noise

    NASA Astrophysics Data System (ADS)

    Abramovich, Yuri I.; Besson, Olivier; Johnson, Ben A.

    2015-11-01

    We consider the problem of estimating the direction of arrival (DoA) of a signal embedded in K-distributed noise, when secondary data containing noise only are assumed to be available. Based upon a recent formula for the Fisher information matrix (FIM) for complex elliptically distributed data, we provide a simple expression for the FIM within the two-data-set framework. In the specific case of K-distributed noise, we show that, under certain conditions, the FIM for the deterministic part of the model can be unbounded, while the FIM for the covariance part of the model is always bounded. In the general case of elliptical distributions, we provide a sufficient condition for unboundedness of the FIM. Accurate approximations of the FIM for K-distributed noise are also derived when it is bounded. Additionally, the maximum likelihood estimator of the signal DoA and an approximated version are derived, assuming known covariance matrix; the latter is then estimated from secondary data using a conventional regularization technique. When the FIM is unbounded, an analysis of the estimators reveals a rate of convergence much faster than the usual T^{-1}. Simulations illustrate the different behaviors of the estimators, depending on whether the FIM is bounded or not.

  19. Regular Soda Policies, School Availability, and High School Student Consumption

    PubMed Central

    Terry-McElrath, Yvonne M.; Chriqui, Jamie F.; O’Malley, Patrick M.; Chaloupka, Frank J.; Johnston, Lloyd D.

    2014-01-01

    Background Beginning in the 2014–2015 school year, all U.S. schools participating in federally reimbursable meal programs are required to implement new nutrition standards for items sold in competitive venues. Multilevel mediation modeling examining direct, mediated, and indirect pathways between policy, availability, and student consumption might provide insight into possible outcomes of implementing aspects of the new standards. Purpose To employ multilevel mediation modeling using state- and school district–level policies mandating school soda bans, school soda availability, and student soda consumption. Methods The 2010–2012 Monitoring the Future surveys obtained nationally representative data on high school student soda consumption; school administrators provided school soda availability data. State laws and district policies were compiled and coded. Analyses conducted in 2014 controlled for state-, school-, and student-level characteristics. Results State–district–school models found that state bans were associated with significantly lower school soda availability (c, p<0.05) but district bans showed no significant associations. No significant direct, mediated, or indirect associations between state policy and student consumption were observed for the overall sample. Among African American high school students, state policy was associated directly with significantly lower school soda availability (a, p<0.01), and—indirectly through lower school availability—with significantly lower soda consumption (a*b, p<0.05). Conclusions These analyses indicate state policy focused on regular soda strongly affected school soda availability, and worked through changes in school availability to decrease soda consumption among African American students, but not the overall population. PMID:25576493

  20. Hippocampal harms, protection and recovery following regular cannabis use.

    PubMed

    Yücel, M; Lorenzetti, V; Suo, C; Zalesky, A; Fornito, A; Takagi, M J; Lubman, D I; Solowij, N

    2016-01-12

    Shifting policies towards legalisation of cannabis for therapeutic and recreational use raise significant ethical issues for health-care providers seeking evidence-based recommendations. We investigated whether heavy cannabis use is associated with persistent harms to the hippocampus, if exposure to cannabidiol offers protection, and whether recovery occurs with abstinence. To do this, we assessed 111 participants: 74 long-term regular cannabis users (with an average of 15.4 years of use) and 37 non-user healthy controls. Cannabis users included subgroups of participants who were either exposed to Δ9-tetrahydrocannabinol (THC) but not to cannabidiol (CBD) or exposed to both, and former users with sustained abstinence. Participants underwent magnetic resonance imaging from which three measures of hippocampal integrity were assessed: (i) volume; (ii) fractional anisotropy; and (iii) N-acetylaspartate (NAA). Three curve-fitting models across the entire sample were tested for each measure to examine whether cannabis-related hippocampal harms are persistent, can be minimised (protected) by exposure to CBD or recovered through long-term abstinence. These analyses supported a protection and recovery model for hippocampal volume (P=0.003) and NAA (P=0.001). Further pairwise analyses showed that cannabis users had smaller hippocampal volumes relative to controls. Users not exposed to CBD had 11% reduced volumes and 15% lower NAA concentrations. Users exposed to CBD and former users did not differ from controls on any measure. Ongoing cannabis use is associated with harms to brain health, underpinned by chronic exposure to THC. However, such harms are minimised by CBD, and can be recovered with extended periods of abstinence.

  1. Entropy principle, non-regular processes, and generalized exploitation procedures

    NASA Astrophysics Data System (ADS)

    Triani, V.; Cimmelli, V. A.

    2012-06-01

    The classical Coleman-Noll approach to the exploitation of the entropy principle regards the classical balances of mass, linear and angular momentum and energy as differential constraints for the entropy inequality, and presupposes that the second law of thermodynamics is a restriction on the constitutive equations describing the material properties [B. D. Coleman and W. Noll, "The thermodynamics of elastic materials with heat conduction and viscosity," Arch. Rational Mech. Anal. 13, 167-178 (1963), 10.1007/BF01262690]. In 1996, Muschik and Ehrentraut proved that this presupposition may be confirmed by a rigorous proof, provided that an amendment to the classical second law of thermodynamics, which asserts that, except in equilibria, reversible process directions in state space do not exist, is postulated ["An amendment to the second law," J. Non-Equilib. Thermodyn. 21, 175-192 (1996), 10.1515/jnet.1996.21.2.175]. In their paper, the authors considered regular processes only. In a recent article [V. Triani and V. A. Cimmelli, "Interpretation of second law of thermodynamics in the presence of interfaces," Continuum. Mech. Thermodyn. 24, 165-174 (2012), 10.1007/s00161-011-0231-8], we proved that the result above remains valid in the presence of interfaces across which the unknown fields suffer jump discontinuities. Here, we show that the same conclusions achieved by Muschik and Ehrentraut and Triani and Cimmelli hold when the classical Coleman-Noll and Liu ["Method of Lagrange multipliers for exploitation of the entropy principle," Arch. Rational Mech. Anal. 46, 131-148 (1972), 10.1007/BF00250688] procedures for the exploitation of the second law, are generalized by considering also the gradients of the fundamental balance equations as constraints for the entropy inequality.

  2. Hippocampal harms, protection and recovery following regular cannabis use

    PubMed Central

    Yücel, M; Lorenzetti, V; Suo, C; Zalesky, A; Fornito, A; Takagi, M J; Lubman, D I; Solowij, N

    2016-01-01

    Shifting policies towards legalisation of cannabis for therapeutic and recreational use raise significant ethical issues for health-care providers seeking evidence-based recommendations. We investigated whether heavy cannabis use is associated with persistent harms to the hippocampus, if exposure to cannabidiol offers protection, and whether recovery occurs with abstinence. To do this, we assessed 111 participants: 74 long-term regular cannabis users (with an average of 15.4 years of use) and 37 non-user healthy controls. Cannabis users included subgroups of participants who were either exposed to Δ9-tetrahydrocannabinol (THC) but not to cannabidiol (CBD) or exposed to both, and former users with sustained abstinence. Participants underwent magnetic resonance imaging from which three measures of hippocampal integrity were assessed: (i) volume; (ii) fractional anisotropy; and (iii) N-acetylaspartate (NAA). Three curve-fitting models across the entire sample were tested for each measure to examine whether cannabis-related hippocampal harms are persistent, can be minimised (protected) by exposure to CBD or recovered through long-term abstinence. These analyses supported a protection and recovery model for hippocampal volume (P=0.003) and NAA (P=0.001). Further pairwise analyses showed that cannabis users had smaller hippocampal volumes relative to controls. Users not exposed to CBD had 11% reduced volumes and 15% lower NAA concentrations. Users exposed to CBD and former users did not differ from controls on any measure. Ongoing cannabis use is associated with harms to brain health, underpinned by chronic exposure to THC. However, such harms are minimised by CBD, and can be recovered with extended periods of abstinence. PMID:26756903

  3. Inference process of programmed attributed regular grammars for character recognition

    NASA Astrophysics Data System (ADS)

    Prundaru, Mihail; Prundaru, Ioana

    2000-12-01

    The paper presents the grammar inference engine of a pattern recognition system for character recognition. The input characters are identified, thinned to a one-pixel-wide pattern, and a feature-based description is provided. Under the syntactic recognition paradigm, the features form the set of terminals (terminal symbols) for the application. The feature-based description includes a set of three attributes (A, B, C) for each feature. The combined feature and attribute description of each input pattern preserves the structure of the original pattern more accurately. The grammar inference engine uses the feature-based description of each input pattern from the training set to build a grammar for each class of patterns. For each input pattern in the training set, the productions (rewriting rules) are derived together with all the necessary elements, such as the nonterminals and the branch and testing conditions. Since the grammars are regular, deriving the production rules is simple. All the productions are collected together, with tags kept consecutive and without gaps. The size of the class grammars is then reduced to a level acceptable for further processing using a set of Evans heuristic rules; this algorithm identifies redundant productions and eliminates them along with the corresponding nonterminal symbols. The stopping criterion for the Evans thinning algorithm ensures that no further reductions are possible. The last step of the grammar inference process adds a cycling production rule, which enables the grammar to identify class members that were not in the training set.
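
    A toy version of the core construction (one right-linear production per feature occurrence, set-based deduplication standing in for the Evans reduction, and a final cycling rule) can be sketched as follows; the feature alphabet and training strings are invented for illustration.

        # Build a right-linear (regular) grammar from feature strings of
        # one class, deduplicate productions, and add a cycling rule.
        training = ["LCL", "LCCL", "LCCCL"]   # invented line/curve codes

        productions = set()
        for word in training:
            for i, feature in enumerate(word):
                lhs = "S" if i == 0 else f"N{i}"
                rhs = "" if i == len(word) - 1 else f"N{i + 1}"
                productions.add((lhs, feature, rhs))

        for lhs, feature, rhs in sorted(productions):
            print(f"{lhs} -> {feature} {rhs}".rstrip())

        # Cycling production generalizing over repeated middle features,
        # so longer unseen class members can still be derived:
        print("N2 -> C N2")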

  4. Joint regularization for spectro-temporal CT reconstruction

    NASA Astrophysics Data System (ADS)

    Clark, D. P.; Badea, C. T.

    2016-03-01

    X-ray CT is widely used, both clinically and preclinically, for fast, high-resolution, anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. In previous work, we proposed and demonstrated a projection acquisition and reconstruction strategy for 5D CT (3D + dual-energy + time) which recovered spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. The approach relied on the approximate separability of the temporal and spectral reconstruction sub-problems, which enabled substantial projection undersampling and effective regularization. Here, we extend this previous work to more general, nonseparable 5D CT reconstruction cases (3D + multi-energy + time) with applicability to K-edge imaging of exogenous contrast agents. We apply the newly proposed algorithm in phantom simulations using a realistic system and noise model for a photon-counting x-ray detector with six energy thresholds. The MOBY mouse phantom used contains realistic concentrations of iodine, gold, and calcium in water. Relative to weighted least-squares reconstruction, the proposed 5D reconstruction algorithm improved reconstruction and material decomposition accuracy by 3-18 times. Furthermore, by exploiting joint, low-rank image structure between time points and energies, ~80 HU of contrast associated with the K-edge of gold and ~35 HU of contrast associated with the blood pool and myocardium were recovered from more than 400 HU of noise.

  5. Density estimators in particle hydrodynamics. DTFE versus regular SPH

    NASA Astrophysics Data System (ADS)

    Pelupessy, F. I.; Schaap, W. E.; van de Weygaert, R.

    2003-05-01

    We present the results of a study comparing density maps reconstructed by the Delaunay Tessellation Field Estimator (DTFE) and by regular SPH kernel-based techniques. The density maps are constructed from the outcome of an SPH particle hydrodynamics simulation of a multiphase interstellar medium. The comparison between the two methods clearly demonstrates the superior performance of the DTFE with respect to conventional SPH methods, in particular at locations where SPH appears to fail. Filamentary and sheetlike structures form telling examples. The DTFE is a fully self-adaptive technique for reconstructing continuous density fields from discrete particle distributions, and is based upon the corresponding Delaunay tessellation. Its principal asset is its complete independence of arbitrary smoothing functions and parameters specifying the properties of these. As a result it manages to faithfully reproduce the anisotropies of the local particle distribution and through its adaptive and local nature proves to be optimally suited for uncovering the full structural richness in the density distribution. Through the improvement in local density estimates, calculations invoking the DTFE will yield a much better representation of physical processes which depend on density. This will be crucial in the case of feedback processes, which play a major role in galaxy and star formation. The presented results form an encouraging step towards the application and insertion of the DTFE in astrophysical hydrocodes. We describe an outline for the construction of a particle hydrodynamics code in which the DTFE replaces kernel-based methods. Further discussion addresses the issue and possibilities for a moving grid-based hydrocode invoking the DTFE, and Delaunay tessellations, in an attempt to combine the virtues of the Eulerian and Lagrangian approaches.
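
    The essential DTFE estimate is remarkably compact: the density at a sample point is (D + 1) divided by the total volume of the Delaunay simplices sharing that point. A minimal 2-D sketch with scipy follows (unit-mass points, illustrative sizes, none of the SPH comparison machinery):

        # DTFE point densities in 2-D: density_i = (D + 1) / star_area_i.
        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(10)
        pts = rng.uniform(0.0, 1.0, size=(500, 2))
        tri = Delaunay(pts)

        star_area = np.zeros(len(pts))
        for simplex in tri.simplices:
            a, b, c = pts[simplex]
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                             - (b[1] - a[1]) * (c[0] - a[0]))
            star_area[simplex] += area        # credit all three vertices

        density = (2 + 1) / star_area         # D = 2
        print('mean density (roughly N per unit area):', density.mean())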

  6. Deconvolution methods based on φHL regularization for spectral recovery.

    PubMed

    Zhu, Hu; Deng, Lizhen; Bai, Xiaodong; Li, Meng; Cheng, Zhao

    2015-05-10

    The recorded spectra often suffer from noise and band overlapping, and deconvolution methods are commonly used for spectral recovery. However, during spectral recovery, details are not always preserved. To solve this problem, two regularization terms are introduced and analyzed. First, the conditions a regularization term must satisfy to smooth noise while preserving detail are analyzed, and according to these conditions, φHL regularization is introduced into the spectral deconvolution model. In view of the deficiency of φHL under noisy conditions, adaptive φHL regularization (φAHL) is proposed. Then semi-blind deconvolution methods based on φHL regularization (SBD-HL) and on adaptive φHL regularization (SBD-AHL) are proposed, respectively. Simulation results indicate that the proposed SBD-HL and SBD-AHL methods achieve better recovery, with SBD-AHL superior to SBD-HL, especially in the noisy case.

  7. Ensemble regularized linear discriminant analysis classifier for P300-based brain-computer interface.

    PubMed

    Onishi, Akinari; Natsume, Kiyohisa

    2013-01-01

    This paper demonstrates the improved classification performance of an ensemble classifier using regularized linear discriminant analysis (LDA) for a P300-based brain-computer interface (BCI). An ensemble classifier with LDA is sensitive to the lack of training data because the covariance matrices are estimated imprecisely. One solution to the lack of training data is to employ a regularized LDA. We therefore employed regularized LDA for the ensemble classifier of the P300-based BCI. Principal component analysis (PCA) was used for dimension reduction. As a result, the ensemble regularized LDA classifier showed significantly better classification performance than the ensemble unregularized LDA classifier. The proposed ensemble regularized LDA classifier is therefore robust against the lack of training data.
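
    With scikit-learn, a regularized LDA of this kind is nearly a one-liner (covariance shrinkage via the 'lsqr' solver); the sketch below wraps it in a simple bagged ensemble on synthetic stand-ins for PCA-reduced P300 features. The data shapes and the 0.4 class shift are invented, and bagging here is a generic stand-in for the paper's ensemble scheme.

        # Bagged ensemble of shrinkage-LDA classifiers on synthetic data.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(11)
        n, p = 60, 40                          # few trials, many features
        X = rng.standard_normal((n, p))
        y = rng.integers(0, 2, n)
        X[y == 1] += 0.4                       # target-ERP class shift

        models = []
        for _ in range(10):                    # bootstrap-resampled LDAs
            idx = rng.choice(n, n, replace=True)
            lda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
            models.append(lda.fit(X[idx], y[idx]))

        X_test = rng.standard_normal((20, p))
        y_test = rng.integers(0, 2, 20)
        X_test[y_test == 1] += 0.4
        score = np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
        print('ensemble accuracy:', ((score > 0.5) == y_test).mean())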

  8. An Anisotropic Partial Regularity Criterion for the Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Kukavica, Igor; Rusin, Walter; Ziane, Mohammed

    2016-07-01

    In this paper, we address the partial regularity of suitable weak solutions of the incompressible Navier-Stokes equations. We prove an interior regularity criterion involving only one component of the velocity. Namely, if (u, p) is a suitable weak solution and a certain scale-invariant quantity involving only u_3 is small on a space-time cylinder Q_{r*}(x_0, t_0), then u is regular at (x_0, t_0).

  9. Generalization of Levi-Civita regularization in the restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Roman, R.; Szücs-Csillik, I.

    2014-01-01

    A family of coupled polynomial functions of degree n is proposed in order to generalize the Levi-Civita regularization method in the restricted three-body problem. Analytical relationships between the polar radii in the physical plane and in the regularized plane are established, and similarly for the polar angles. As a numerical application, trajectories of the test particle are obtained using polynomial functions of degree 2, 3, ..., 8. For the polynomial of degree two, the Levi-Civita regularization method is recovered.
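
    For orientation, the classical transformation and the pattern of its degree-n analogue can be written compactly (a sketch of the standard relations only; the paper's exact polynomial family may differ in detail):

        % Physical plane (x, y), regularized plane (u, v), polar radii
        % r and rho, polar angles theta and phi.
        \begin{align*}
          x + \mathrm{i}\,y &= (u + \mathrm{i}\,v)^{2}
            && \text{(Levi-Civita, } n = 2\text{)} \\
          x + \mathrm{i}\,y &= (u + \mathrm{i}\,v)^{n}
            && \text{(degree-}n\text{ generalization)} \\
          r &= \rho^{\,n}, \qquad \theta = n\,\varphi
            && \text{(polar radii and angles)} \\
          \frac{\mathrm{d}t}{\mathrm{d}s} &= r
            && \text{(Sundman-type time transformation)}
        \end{align*}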

  10. Regularity of Spike Trains and Harmony Perception in a Model of the Auditory System

    NASA Astrophysics Data System (ADS)

    Ushakov, Yu. V.; Dubkov, A. A.; Spagnolo, B.

    2011-09-01

    The regularity of spike trains in a noisy neural auditory system model under the influence of two sinusoidal signals with different frequencies is investigated. For increasing ratio m/n of the input signal frequencies (m, n natural numbers), linear growth of the regularity is found at fixed difference (m-n). It is shown that the spike train regularity in the model is high for harmonious chords of input tones and low for dissonant ones.

  11. Neural circuits subserving the retrieval of stems and grammatical features in regular and irregular verbs.

    PubMed

    de Diego Balaguer, Ruth; Rodríguez-Fornells, Antoni; Rotte, Michael; Bahlmann, Jörg; Heinze, Hans-Jochen; Münte, Thomas F

    2006-11-01

    Many languages, including English and Spanish, feature regular (dance --> danced) and irregular (catch --> caught) inflectional systems. According to psycholinguistic theories, regular and irregular inflections are instantiated either by a single or by two specialized mechanisms. Those theories differ in their assumptions concerning the underlying information necessary for the processing of regular verbs. Whereas single mechanism accounts have stated an increased involvement of phonological processing for regular verbs, dual accounts emphasize the prominence of grammatical information. Using event-related functional magnetic resonance imaging, we sought to delineate the brain areas involved in the generation of complex verb forms in Spanish. This language has the advantage of isolating specific differences in the regular-irregular contrasts in terms of the number of stems associated with a verb while controlling for compositionality (regular and irregular verbs apply suffixes to be inflected). The present study showed that areas related to grammatical processing are active for both types of verbs (left opercular inferior frontal gyrus). In addition, major differences between regular and irregular verbs were also observed. Several areas of the prefrontal cortex were selectively active for irregular production, presumably reflecting their role in lexical retrieval (bilateral inferior frontal area and dorsolateral prefrontal cortex). Regular verbs, however, showed increased activation in areas related to grammatical processing (anterior superior temporal gyrus/insular cortex) and in the left hippocampus, the latter possibly related to a greater implication of the phonological loop necessary for the reutilization of the same stem shared across all forms in regular verbs.

  12. Condom Use and Intimacy among Tajik Male Migrants and their Regular Female Partners in Moscow

    PubMed Central

    Polutnik, Chloe; Jonbekov, Jonbek; Shoakova, Farzona; Bahromov, Mahbat; Weine, Stevan

    2014-01-01

    This study examined condom use and intimacy among Tajik male migrants and their regular female partners in Moscow, Russia. The study included a survey of 400 Tajik male labour migrants and longitudinal ethnographic interviews with 30 of the surveyed migrants and 30 of their regular female partners. Of the surveyed male migrants, 351 (88%) reported having a regular female partner in Moscow. Findings demonstrated that the migrants' and regular partners' intentions to use condoms diminished with increased intimacy, yet each party perceived intimacy differently. Migrants' intimacy with regular partners was determined by familiarity and the perceived sexual cleanliness of the partner. Migrants believed that Muslim women were cleaner than Orthodox Christian women and reported using condoms more frequently with Orthodox Christian regular partners. Regular partners reported determining intimacy based on the perceived commitment of the male migrant. When perceived commitment faced a crisis, intimacy declined and regular partners renegotiated condom use. The association between intimacy and condom use suggests that HIV prevention programmes should aim to help male migrants and female regular partners to dissociate their approaches to condom use from their perceptions of intimacy. PMID:25033817

  13. 20 CFR 220.13 - Establishment of permanent disability for work in regular railroad occupation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... consistent, by reviewing the employee's medical history, physical and mental examination findings, laboratory... the physical requirements and environmental factors relating to the employee's regular...

  14. Regularity and chaos at critical points of first-order quantum phase transitions

    NASA Astrophysics Data System (ADS)

    Macek, M.; Leviatan, A.

    2011-10-01

    We study the interplay between regular and chaotic dynamics at the critical point of a generic first-order quantum phase transition in an interacting boson model of nuclei. A classical analysis reveals a distinct behavior of the coexisting phases in a broad energy range. The dynamics is completely regular in the deformed phase and, simultaneously, strongly chaotic in the spherical phase. A quantum analysis of the spectra separates the regular states from the irregular ones, assigns them to particular phases, and discloses persisting regular rotational bands in the deformed region.
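
    A classical regular-versus-chaotic classification of this kind is commonly done by estimating the largest Lyapunov exponent of individual trajectories. A minimal sketch using the Hénon-Heiles potential as a toy stand-in (the paper analyzes an interacting boson model, whose Hamiltonian is not reproduced here):

        import numpy as np

        def henon_heiles_step(state, dt):
            """One symplectic-Euler step for the Henon-Heiles Hamiltonian,
            a standard toy system with mixed regular/chaotic dynamics."""
            x, y, px, py = state
            ax = -(x + 2*x*y)            # -dV/dx
            ay = -(y + x**2 - y**2)      # -dV/dy
            px, py = px + dt*ax, py + dt*ay
            x, y = x + dt*px, y + dt*py
            return np.array([x, y, px, py])

        def largest_lyapunov(state0, dt=1e-3, n_steps=100_000, d0=1e-8):
            """Benettin's two-trajectory method: evolve a reference and a
            slightly perturbed orbit, renormalize their separation each step,
            and average the logarithmic growth rate."""
            a = np.array(state0, dtype=float)
            b = a + np.array([d0, 0.0, 0.0, 0.0])
            log_sum = 0.0
            for _ in range(n_steps):
                a = henon_heiles_step(a, dt)
                b = henon_heiles_step(b, dt)
                d = np.linalg.norm(b - a)
                log_sum += np.log(d / d0)
                b = a + (b - a) * (d0 / d)   # renormalize separation
            return log_sum / (n_steps * dt)

        # Near-zero exponent suggests a regular orbit; a clearly positive
        # one suggests chaos (the classification depends on the orbit chosen).
        print("low energy :", largest_lyapunov([0.0, 0.1, 0.05, 0.0]))
        print("high energy:", largest_lyapunov([0.0, 0.1, 0.57, 0.0]))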

  15. On a regularity criterion for the Navier-Stokes equations involving gradient of one velocity component

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Pokorný, Milan

    2009-12-01

    We improve the regularity criterion for the incompressible Navier-Stokes equations in the full three-dimensional space involving the gradient of one velocity component. The method is based on recent results of Cao and Titi [see "Regularity criteria for the three dimensional Navier-Stokes equations," Indiana Univ. Math. J. 57, 2643 (2008)] and Kukavica and Ziane [see "Navier-Stokes equations with regularity in one direction," J. Math. Phys. 48, 065203 (2007)]. In particular, for s ∈ [2,3], we get that the solution is regular if ∇u_3 ∈ L^t(0,T; L^s(R^3)) with 2/t + 3/s ≤ 23/12.

  16. Condom use and intimacy among Tajik male migrants and their regular female partners in Moscow.

    PubMed

    Zabrocki, Christopher; Polutnik, Chloe; Jonbekov, Jonbek; Shoakova, Farzona; Bahromov, Mahbat; Weine, Stevan

    2015-01-01

    This study examined condom use and intimacy among Tajik male migrants and their regular female partners in Moscow, Russia. This study included a survey of 400 Tajik male labour migrants and longitudinal ethnographic interviews with 30 of the surveyed male migrants and 30 of their regular female partners. Of the surveyed male migrants, 351 (88%) reported having a regular female partner in Moscow. Findings demonstrated that the migrants' and regular partners' intentions to use condoms diminished with increased intimacy, yet each party perceived intimacy differently. Migrants' intimacy with regular partners was determined by their familiarity and the perceived sexual cleanliness of their partner. Migrants believed that Muslim women were cleaner than Orthodox Christian women and reported using condoms more frequently with Orthodox Christian regular partners. Regular partners reported determining intimacy based on the perceived commitment of the male migrant. When perceived commitment faced a crisis, intimacy declined and regular partners renegotiated condom use. The association between intimacy and condom use suggests that HIV-prevention programmes should aim to help male migrants and female regular partners to dissociate their approaches to condom use from their perceptions of intimacy.

  17. Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.

    PubMed

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. Based on CS theory, a suitable dictionary for sparse representation of the signal is required. In diffusion MRI (dMRI), CS methods proposed for reconstruction of the diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI), for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction will be demonstrated theoretically and empirically. Third, optimization in DL-SPFI is performed only in a small subspace spanned by the SPF coefficients, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al. The experimental results on synthetic and real data indicate that the learned dictionary produces
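
    The voxel-wise reconstruction step reduces to a weighted LASSO problem. A minimal sketch, assuming a generic learned dictionary D and per-atom weights w (the names and the column-rescaling solver below are illustrative, not the authors' implementation):

        import numpy as np
        from sklearn.linear_model import Lasso

        def weighted_lasso(signal, D, w, lam=0.05):
            """Solve  min_c  0.5*||signal - D @ c||^2 + lam * sum_i w[i]*|c[i]|
            by rescaling dictionary columns (substituting c[i] = z[i]/w[i]) and
            calling a standard LASSO solver. Illustrative sketch only -- the
            paper's DL-SPFI uses a learned SPF dictionary and its own weights.
            """
            D_scaled = D / w                      # divide each column i by w[i]
            n = D.shape[0]
            # sklearn's Lasso minimizes (1/(2n))*||y - Xz||^2 + alpha*||z||_1,
            # so alpha = lam/n matches the objective above.
            model = Lasso(alpha=lam / n, fit_intercept=False, max_iter=10_000)
            model.fit(D_scaled, signal)
            return model.coef_ / w                # undo the substitution

        # Toy example: recover a sparse code from a random dictionary
        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 128))
        c_true = np.zeros(128); c_true[[3, 40, 99]] = [1.0, -0.7, 0.5]
        s = D @ c_true + 0.01 * rng.standard_normal(64)
        w = np.ones(128)                          # uniform weights for the demo
        c_hat = weighted_lasso(s, D, w)
        print("nonzeros recovered:", np.flatnonzero(np.abs(c_hat) > 0.1))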

  18. Increased Hepato-Splanchnic Vasoconstriction in Diabetics during Regular Hemodialysis

    PubMed Central

    Ribitsch, Werner; Schneditz, Daniel; Franssen, Casper F. M.; Schilcher, Gernot; Stadlbauer, Vanessa; Horina, Jörg H.; Rosenkranz, Alexander R.

    2015-01-01

    Background and Objectives Ultrafiltration (UF) of excess fluid activates numerous compensatory mechanisms during hemodialysis (HD). The increase of both total peripheral and splanchnic vascular resistance is considered essential in maintaining hemodynamic stability. The aim of this study was to evaluate the extent of UF-induced changes in hepato-splanchnic blood flow and resistance in a group of maintenance HD patients during regular dialysis. Design, Setting, Participants, & Measurements Hepato-splanchnic flow resistance index (RI) and hepato-splanchnic perfusion index (QI) were measured in 12 chronic HD patients using a modified, non-invasive indocyanine green (ICG) dilution method. During a midweek dialysis session we determined RI, QI, the ICG disappearance rate (kICG), plasma volume (Vp), hematocrit (Hct), mean arterial blood pressure (MAP) and heart rate (HR) at four time points at hourly intervals (t1 to t4). Dialysis settings were standardized and all patient studies were done in duplicate. Results In the whole study group the mean UF volume was 1.86 ± 0.46 L, Vp dropped from 3.65 ± 0.77 L at t1 to 3.40 ± 0.78 L at t4, and all patients remained hemodynamically stable. In all patients RI significantly increased from 12.40 ± 4.21 mmHg∙s∙m2/mL at t1 to 14.94 ± 6.36 mmHg∙s∙m2/mL at t4, while QI significantly decreased from 0.61 ± 0.22 L/min/m2 at t1 to 0.52 ± 0.20 L/min/m2 at t4, indicating active vasoconstriction. In diabetic subjects, however, RI was significantly larger than in non-diabetics at all time points, and QI was correspondingly lower. Conclusions In chronic HD patients hepato-splanchnic blood flow decreases substantially during moderate UF as a result of active splanchnic vasoconstriction. Our data indicate that diabetic HD patients are particularly prone to splanchnic ischemia and might therefore have an increased risk of bacterial translocation, endotoxemia and systemic inflammation. PMID:26713734
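
    Vascular resistance follows the usual pressure/flow relation. A hedged back-of-the-envelope sketch (assuming RI ≈ MAP / QI with venous pressure neglected, and converting L/min to mL/s; the study's exact ICG-based formula may differ):

        def resistance_index(map_mmHg, qi_L_min_m2):
            """Estimate a hepato-splanchnic flow resistance index in
            mmHg*s*m^2/mL from mean arterial pressure and perfusion index.
            Assumes RI = MAP / QI with venous pressure neglected -- an
            illustrative simplification, not the paper's exact method."""
            qi_mL_s_m2 = qi_L_min_m2 * 1000.0 / 60.0   # L/min/m^2 -> mL/s/m^2
            return map_mmHg / qi_mL_s_m2

        # Using the reported QI values and a nominal MAP of 100 mmHg:
        print("t1:", resistance_index(100, 0.61))   # ~9.8 mmHg*s*m^2/mL
        print("t4:", resistance_index(100, 0.52))   # ~11.5 mmHg*s*m^2/mL

    The resulting magnitudes are of the same order as the reported RI values, consistent with the simple pressure-over-flow interpretation of the index.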

  19. Life with quintuplets: transitioning GeMS into regular operations

    NASA Astrophysics Data System (ADS)

    Garrel, Vincent; Van Dam, Marcos A.; Neichel, Benoît; Vidal, Fabrice; Sivo, Gaetano; Marin, Eduardo; Montes, Vanessa; Serio, Andrew; Arriagada, Gustavo; Trujillo, Chadwick; Rambold, William N.; Gigoux, Pedro; Galvez, Ramon; Moreno, Cristian; Araujo Hauck, Constanza; Vucina Parga, Tomislav; Donahue, Jeff; Marchant, Claudio; Gausachs, Gaston; Collao, Fabian; Carrasco Damele, Eleazar R.; Pessev, Peter; Lopez, Ariel

    2014-08-01

    The Gemini Multi-conjugate adaptive optics System (GeMS) at the Gemini South telescope on Cerro Pachón is the first sodium Laser Guide Star (LGS) adaptive optics (AO) system with multiple guide stars. It uses five LGSs and two deformable mirrors (DMs) to measure and compensate for distortions induced by atmospheric turbulence. After its 2012 commissioning phase, it is now transitioning into regular operations. Although GeMS has unique scientific capabilities, it remains a challenging instrument to maintain, operate and upgrade. In this paper, we summarize the latest news and results. First, we describe the engineering work done this past year, mostly during our last instrument shutdown in the 2013 austral winter, covering many subsystems: the correction of an erroneous reconjugation of the laser guide star wavefront sensor, the correction of focus field distortion for the natural guide star wavefront sensor, and engineering changes to our laser and its beam transfer optics. We also describe our revamped software, developed to integrate the instrument into the Gemini operational model, and the new optimization procedures aiming to reduce GeMS time overheads. Significant software improvements were achieved on the acquisition of natural guide stars by our natural guide star wavefront sensor, on the automation of tip-tilt and higher-order loop optimization, and on the tomographic non-common-path aberration compensation. We then go through the current operational scheme and present the plan for the coming years. We offered 38 nights in our last semester. We review the current system efficiency in terms of raw performance, completed programs and time overheads. We also present our current efforts to merge GeMS into the Gemini base facility project, where night operations are all reliably driven from our La Serena headquarters, without the need for any spotter. Finally, we present the plan for future upgrades, mostly dedicated toward improving the performance and reliability of the

  20. Prospective regularization design in prior-image-based reconstruction.

    PubMed

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-12-21

    Prior-image-based reconstruction (PIBR) methods, which leverage patient-specific anatomical information from previous imaging studies and/or sequences, have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance between information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects, including false structures and failure to improve image quality. Traditional methods based on heuristics are prone to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior-image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator and introduces a predictive performance metric based on this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior-image-strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior-image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior-strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in
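
    A generic PIBR objective of the kind being tuned here balances data fidelity, image roughness, and deviation from the prior image x_P (a hedged sketch; the symbols beta_R, beta_P and the penalty choices are illustrative, not necessarily the authors' exact formulation):

        % Generic prior-image-based reconstruction objective (illustrative):
        %   data-fidelity term    roughness penalty       prior-image penalty
        \hat{x} = \arg\min_{x}\; \| A x - y \|_{W}^{2}
                \;+\; \beta_{R} \,\| \Psi x \|_{1}
                \;+\; \beta_{P} \,\| \Psi (x - x_{P}) \|_{1}

    Here A is the forward projector, y the measurements, W a statistical weighting, and Psi a sparsifying transform; beta_P plays the role of the "prior image strength" whose optimal (possibly spatially varying) value the proposed method predicts prospectively.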