Science.gov

Sample records for dimensionally regularized polyakov

  1. How the Polyakov loop and the regularization affect strangeness and restoration of symmetries at finite T

    SciTech Connect

    Ruivo, M. C.; Costa, P.; Sousa, C. A. de; Hansen, H.

    2010-08-05

    The effects of the Polyakov loop and of a regularization procedure that allows the presence of high-momentum quark states at finite temperature are investigated within the Polyakov-loop extended Nambu-Jona-Lasinio model. The characteristic temperatures, as well as the behavior of observables that signal deconfinement and restoration of chiral and axial symmetries, are analyzed, paying special attention to the behavior of strangeness degrees of freedom. We observe that the cumulative effects of the Polyakov loop and of the regularization procedure contribute to a better description of the thermodynamics, as compared with lattice estimates. We find a faster partial restoration of chiral symmetry, and the restoration of the axial symmetry appears as a natural consequence of the full recovery of the dynamically broken chiral symmetry. These results show the relevance of the interplay among the Polyakov loop dynamics, the high-momentum quark states, and the restoration of the chiral and axial symmetries at finite temperature.

  2. Dimensional regularization in configuration space

    SciTech Connect

    Bollini, C.G.; Giambiagi, J.J.

    1996-05-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in ν dimensions the perturbative momentum space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov and Shirkov, are needed for convolutions. The regularized causal functions in x space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant analytic functions of ν. Several examples are discussed. © 1996 The American Physical Society.
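
    As a quick editorial illustration of how ultraviolet divergences surface as poles in the continued dimension ν (here parametrized as ν = 4 − 2ε), the sketch below Laurent-expands the standard Euclidean one-loop tadpole integral symbolically. The normalization and the use of the Γ-function recursion to expose the pole are assumptions made for this example; none of it is taken from the paper.

```python
import sympy as sp

# Hypothetical illustration: the Euclidean one-loop tadpole in nu = 4 - 2*eps dimensions,
#   I(nu) = Gamma(1 - nu/2) / (4*pi)**(nu/2) * m**(nu - 2),
# develops a pole in eps, i.e. a pole in the analytically continued dimension.
eps, m = sp.symbols('epsilon m', positive=True)
nu = 4 - 2*eps

# Expose the pole explicitly via Gamma(eps - 1) = Gamma(eps + 1) / (eps*(eps - 1)).
gamma_factor = sp.gamma(eps + 1) / (eps * (eps - 1))
I = gamma_factor / (4*sp.pi)**(nu/2) * m**(nu - 2)

# Laurent expansion around eps = 0: the leading 1/eps term is the UV divergence.
print(sp.series(I, eps, 0, 1))
```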

  3. Weighted power counting and chiral dimensional regularization

    NASA Astrophysics Data System (ADS)

    Anselmi, Damiano

    2014-06-01

    We define a modified dimensional-regularization technique that overcomes several difficulties of the ordinary technique, and is specially designed to work efficiently in chiral and parity violating quantum field theories, in arbitrary dimensions greater than 2. When the dimension of spacetime is continued to complex values, spinors, vectors and tensors keep the components they have in the physical dimension; therefore, the γ matrices are the standard ones. Propagators are regularized with the help of evanescent higher-derivative kinetic terms, which are of the Majorana type in the case of chiral fermions. If the new terms are organized in a clever way, weighted power counting provides an efficient control on the renormalization of the theory, and allows us to show that the resulting chiral dimensional regularization is consistent to all orders. The new technique considerably simplifies the proofs of properties that hold to all orders, and makes them suitable to be generalized to wider classes of models. Typical examples are the renormalizability of chiral gauge theories and the Adler-Bardeen theorem. The difficulty of explicit computations, on the other hand, may increase.

  4. Dimensional regularization and dimensional reduction in the light cone

    SciTech Connect

    Qiu, J.

    2008-06-15

    We calculate all of the 2 → 2 scattering processes in Yang-Mills theory in the light cone gauge, with the dimensional regulator as the UV regulator. The IR is regulated with a cutoff in q⁺. This supplements our earlier work, where a Lorentz noncovariant regulator was used and the final results suffered from problems with gauge fixing. Supersymmetry relations among various amplitudes are checked by using the light cone superfields.

  5. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  6. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  7. Matching effective chiral Lagrangians with dimensional and lattice regularizations

    NASA Astrophysics Data System (ADS)

    Niedermayer, F.; Weisz, P.

    2016-04-01

    We compute the free energy in the presence of a chemical potential coupled to a conserved charge in effective O(n) scalar field theory (without explicit symmetry breaking terms) to third order for asymmetric volumes in general d dimensions, using dimensional (DR) and lattice regularizations. This yields relations between the 4-derivative couplings appearing in the effective actions for the two regularizations, which in turn allows us to translate results, e.g. the mass gap in a finite periodic box in d = 3 + 1 dimensions, from one regularization to the other. Consistency is found with a new direct computation of the mass gap using DR. For the case n = 4, d = 4 the model is the low-energy effective theory of QCD with N_f = 2 massless quarks. The results can thus be used to obtain estimates of low energy constants in the effective chiral Lagrangian from measurements of the low energy observables, including the low lying spectrum of N_f = 2 QCD in the δ-regime using lattice simulations, as proposed by Peter Hasenfratz, or from the susceptibility corresponding to the chemical potential used.

  8. Higher-Order Global Regularity of an Inviscid Voigt-Regularization of the Three-Dimensional Inviscid Resistive Magnetohydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Titi, Edriss S.

    2014-03-01

    We prove existence, uniqueness, and higher-order global regularity of strong solutions to a particular Voigt-regularization of the three-dimensional inviscid resistive magnetohydrodynamic (MHD) equations. Specifically, the coupling of a resistive magnetic field to the Euler-Voigt model is introduced to form an inviscid regularization of the inviscid resistive MHD system. The results hold in both the whole space and in the context of periodic boundary conditions. Weak solutions for this regularized model are also considered, and proven to exist globally in time, but the question of uniqueness for weak solutions is still open. Furthermore, we show that the solutions of the Voigt regularized system converge, as the regularization parameter α → 0, to strong solutions of the original inviscid resistive MHD, on the corresponding time interval of existence of the latter. Moreover, we also establish a new criterion for blow-up of solutions to the original MHD system inspired by this Voigt regularization.

  9. Higher-Order Global Regularity of an Inviscid Voigt-Regularization of the Three-Dimensional Inviscid Resistive Magnetohydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Titi, Edriss S.

    2013-05-01

    We prove existence, uniqueness, and higher-order global regularity of strong solutions to a particular Voigt-regularization of the three-dimensional inviscid resistive magnetohydrodynamic (MHD) equations. Specifically, the coupling of a resistive magnetic field to the Euler-Voigt model is introduced to form an inviscid regularization of the inviscid resistive MHD system. The results hold in both the whole space ℝ³ and in the context of periodic boundary conditions. Weak solutions for this regularized model are also considered, and proven to exist globally in time, but the question of uniqueness for weak solutions is still open. Furthermore, we show that the solutions of the Voigt regularized system converge, as the regularization parameter α → 0, to strong solutions of the original inviscid resistive MHD, on the corresponding time interval of existence of the latter. Moreover, we also establish a new criterion for blow-up of solutions to the original MHD system inspired by this Voigt regularization.

  10. Dimensional reduction in numerical relativity: Modified Cartoon formalism and regularization

    NASA Astrophysics Data System (ADS)

    Cook, William G.; Figueras, Pau; Kunesch, Markus; Sperhake, Ulrich; Tunyasuvunakool, Saran

    2016-06-01

    We present in detail the Einstein equations in the Baumgarte-Shapiro-Shibata-Nakamura formulation for the case of D-dimensional spacetimes with SO(D ‑ d) isometry based on a method originally introduced in Ref. 1. Regularized expressions are given for a numerical implementation of this method on a vertex centered grid including the origin of the quasi-radial coordinate that covers the extra dimensions with rotational symmetry. Axisymmetry, corresponding to the value d = D ‑ 2, represents a special case with fewer constraints on the vanishing of tensor components and is conveniently implemented in a variation of the general method. The robustness of the scheme is demonstrated for the case of a black-hole head-on collision in D = 7 spacetime dimensions with SO(4) symmetry.

  11. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    PubMed

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead. PMID:25353924
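
    For readers unfamiliar with the regularized single-relaxation-time scheme, the sketch below shows its key step on a D2Q9 lattice: before collision, the non-equilibrium part of the populations is projected onto its second-order moment. This is a minimal, generic illustration of that technique, not code from the paper; the lattice ordering and helper names are assumptions.

```python
import numpy as np

# D2Q9 lattice velocities and weights (assumed ordering); lattice speed of sound squared.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def feq(rho, u):
    """Second-order truncated Maxwellian equilibrium at a single lattice node."""
    cu = c @ u
    return rho * w * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

def regularized_populations(f):
    """Replace f by feq plus the projection of f - feq onto the second-order Hermite moment."""
    rho = f.sum()
    u = (c.T @ f) / rho
    fneq = f - feq(rho, u)
    # Non-equilibrium momentum flux Pi_ab = sum_i c_ia c_ib f_i^neq
    Pi = np.einsum('ia,ib,i->ab', c, c, fneq)
    # Q_iab = c_ia c_ib - cs2 * delta_ab
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)
    fneq_reg = w / (2*cs2**2) * np.einsum('iab,ab->i', Q, Pi)
    return feq(rho, u) + fneq_reg

# Example: regularize the populations of one node before the usual BGK collision step.
f_node = feq(1.0, np.array([0.05, 0.02])) + 1e-3 * np.random.rand(9)
print(regularized_populations(f_node))
```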

  12. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    PubMed

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

    Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable for certain reasons: First, the elastic net estimator is biased in selecting genes. Second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly consistent in selecting genes compared to the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification. PMID:26520484
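
    A common way to approximate an adaptive elastic net with off-the-shelf tools is to rescale each feature by a data-driven weight before fitting a standard elastic-net logistic regression; the sketch below uses that trick with scikit-learn. It is only a rough stand-in for the AAElastic procedure described above, and the ridge-based initial weights and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adaptive_elastic_net_logistic(X, y, gamma=1.0, C=1.0, l1_ratio=0.5):
    """Two-step adaptive elastic net: initial ridge fit -> per-gene weights -> weighted elastic net."""
    # Step 1: initial (ridge-penalized) estimate of the gene coefficients.
    init = LogisticRegression(penalty='l2', C=1.0, max_iter=5000).fit(X, y)
    beta0 = init.coef_.ravel()
    weights = 1.0 / (np.abs(beta0) + 1e-6) ** gamma   # adaptive penalty weights

    # Step 2: emulate feature-specific penalties by rescaling columns by 1/weight.
    X_scaled = X / weights
    enet = LogisticRegression(penalty='elasticnet', solver='saga',
                              C=C, l1_ratio=l1_ratio, max_iter=5000).fit(X_scaled, y)
    beta = enet.coef_.ravel() / weights               # map back to the original scale
    selected = np.flatnonzero(beta != 0)              # indices of selected genes
    return beta, selected
```

    Rescaling the columns by the inverse weights is the standard way to emulate feature-specific L1 penalties with a solver that only supports a single global penalty level.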

  13. On the Global Regularity of the Two-Dimensional Density Patch for Inhomogeneous Incompressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Liao, Xian; Zhang, Ping

    2016-06-01

    Regarding P.-L. Lions' open question in Oxford Lecture Series in Mathematics and its Applications, Vol. 3 (1996) concerning the propagation of regularity for the density patch, we establish the global existence of solutions to the two-dimensional inhomogeneous incompressible Navier-Stokes system with initial density given by (1 − η)1_{Ω₀} + 1_{Ω₀ᶜ} for some small enough constant η and some W^{k+2,p} domain Ω₀, with initial vorticity belonging to L¹ ∩ L^p and with appropriate tangential regularities. Furthermore, we prove that the regularity of the domain Ω₀ is preserved by time evolution.

  14. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
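
    The two-stage idea can be prototyped with plain lasso solvers: a first-stage sparse regression of each endogenous covariate on the instruments, then a second-stage sparse regression of the outcome on the fitted covariates. The sketch below, using scikit-learn's Lasso with arbitrary penalty levels, illustrates that structure under stated assumptions; it is not the authors' estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_regularized_iv(Z, X, y, alpha1=0.1, alpha2=0.1):
    """Sparse analogue of two-stage least squares.

    Z : (n, q) instruments, X : (n, p) endogenous covariates, y : (n,) outcome.
    """
    n, p = X.shape
    X_hat = np.empty((n, p))
    # Stage 1: L1-regularized regression of each covariate on the instruments.
    for j in range(p):
        stage1 = Lasso(alpha=alpha1, max_iter=10000).fit(Z, X[:, j])
        X_hat[:, j] = stage1.predict(Z)
    # Stage 2: L1-regularized regression of the outcome on the fitted covariates.
    stage2 = Lasso(alpha=alpha2, max_iter=10000).fit(X_hat, y)
    return stage2.coef_
```

    In practice the penalty levels alpha1 and alpha2 would be chosen by cross-validation; fixed values are used here only to keep the sketch short.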

  15. A Regular Tetrahedron Formation Strategy for Swarm Robots in Three-Dimensional Environment

    NASA Astrophysics Data System (ADS)

    Ercan, M. Fikret; Li, Xiang; Liang, Ximing

    A decentralized control method, namely Regular Tetrahedron Formation (RTF), is presented for a swarm of simple robots operating in three-dimensional space. It is based on a virtual spring mechanism and enables four neighboring robots to autonomously form a Regular Tetrahedron (RT) regardless of their initial positions. The RTF method is applied to various sizes of swarms through a dynamic neighbor selection procedure. Each robot's behavior depends only on the positions of three dynamically selected neighbors. An obstacle avoidance model is also introduced. Finally, the algorithm is studied with computational experiments, which demonstrate that it is effective.
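
    The virtual-spring idea can be written down compactly: each robot feels a spring force from each of its three selected neighbors, with the rest length set to the edge length of the target regular tetrahedron. The sketch below is a hypothetical rendering of that rule (gain, rest length, and the velocity-command form are assumptions), not the authors' controller.

```python
import numpy as np

EDGE = 1.0   # desired tetrahedron edge length (assumed)
K = 0.5      # virtual spring gain (assumed)

def virtual_spring_velocity(p_self, neighbors):
    """Velocity command from virtual springs to three selected neighbors.

    Each spring pushes or pulls the robot until its distance to the neighbor equals EDGE,
    so four mutually interacting robots settle into a regular tetrahedron."""
    v = np.zeros(3)
    for p_n in neighbors:
        d = p_n - p_self
        dist = np.linalg.norm(d)
        if dist > 1e-9:
            v += K * (dist - EDGE) * d / dist
    return v

# Example: one robot with three neighbors roughly at tetrahedron vertices.
p = np.array([0.0, 0.0, 0.0])
nbrs = [np.array([1.2, 0.0, 0.0]), np.array([0.5, 0.9, 0.0]), np.array([0.5, 0.3, 0.8])]
print(virtual_spring_velocity(p, nbrs))
```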

  16. Dimensional regularization of the path integral in curved space on an infinite time interval

    NASA Astrophysics Data System (ADS)

    Bastianelli, F.; Corradini, O.; van Nieuwenhuizen, P.

    2000-09-01

    We use dimensional regularization to evaluate quantum mechanical path integrals in arbitrary curved spaces on an infinite time interval. We perform 3-loop calculations in Riemann normal coordinates, and 2-loop calculations in general coordinates. It is shown that one only needs a covariant two-loop counterterm (V_DR = (ℏ²/8)R) to obtain the same results as obtained earlier in other regularization schemes. It is also shown that the mass term needed in order to avoid infrared divergences explicitly breaks general covariance in the final result.

  17. A New 2-Dimensional Millimeter Wave Radiation Imaging System Based on Finite Difference Regularization

    NASA Astrophysics Data System (ADS)

    Zhu, Lu; Liu, Yuanyuan; Chen, Suhua; Hu, Fei; Chen, Zhizhang (David)

    2015-04-01

    Synthetic aperture imaging radiometer (SAIR) has the potential to meet the spatial resolution requirement of passive millimeter remote sensing from space. A new two-dimensional (2-D) imaging radiometer at millimeter wave (MMW) band is described in this paper; it uses a one-dimensional (1-D) synthetic aperture digital radiometer (SADR) to obtain an image in one dimension and a rotary platform to provide a scan in the second dimension. Because of the ill-posed inverse problem of SADR, we propose a new reconstruction algorithm based on Finite Difference (FD) regularization to improve brightness temperature images. Experimental results show that the proposed 2-D MMW radiometer can give the brightness temperature images of natural scenes and that the FD regularization reconstruction algorithm is able to improve the quality of brightness temperature images.
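
    Finite-difference (Tikhonov-type) regularization of an ill-posed linear imaging problem can be written as min_x ||Gx − b||² + λ||Dx||² with D a first-difference operator; the closed-form normal-equations solution is sketched below. The observation matrix, the noise level, and λ are placeholders for illustration, not the radiometer's actual model.

```python
import numpy as np

def fd_regularized_solve(G, b, lam):
    """Solve min_x ||G x - b||^2 + lam * ||D x||^2 with D the first-difference matrix."""
    n = G.shape[1]
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) finite-difference operator
    A = G.T @ G + lam * (D.T @ D)
    return np.linalg.solve(A, G.T @ b)

# Toy example: recover a smooth brightness-temperature profile from a blurred, noisy measurement.
rng = np.random.default_rng(0)
n = 100
x_true = np.sin(np.linspace(0, np.pi, n))
G = np.tril(np.ones((n, n))) / n            # placeholder ill-conditioned observation operator
b = G @ x_true + 0.01 * rng.standard_normal(n)
x_rec = fd_regularized_solve(G, b, lam=1e-2)
```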

  18. Regularization strategy for an inverse problem for a 1 + 1 dimensional wave equation

    NASA Astrophysics Data System (ADS)

    Korpela, Jussi; Lassas, Matti; Oksanen, Lauri

    2016-06-01

    An inverse boundary value problem for a 1 + 1 dimensional wave equation with a wave speed c(x) is considered. We give a regularization strategy for inverting the map A : c ↦ Λ, where Λ is the hyperbolic Neumann-to-Dirichlet map corresponding to the wave speed c. That is, we consider the case when we are given a perturbation of the Neumann-to-Dirichlet map Λ̃ = Λ + E, where E corresponds to the measurement errors, and reconstruct an approximative wave speed c̃. We emphasize that Λ̃ may not be in the range of the map A. We show that the reconstructed wave speed c̃ satisfies ‖c̃ − c‖ ≤ C‖E‖^{1/54}. Our regularization strategy is based on a new formula to compute c from Λ.

  19. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  20. Random packing of regular polygons and star polygons on a flat two-dimensional surface.

    PubMed

    Cieśla, Michał; Barbasz, Jakub

    2014-08-01

    Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using the random sequential adsorption algorithm. The obtained results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packing reproduce already known results for disks. PMID:25215737
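
    Random sequential adsorption is simple to prototype: repeatedly draw a random position and orientation, keep the shape only if it does not overlap anything already placed, and stop after a fixed number of consecutive failed attempts. The sketch below does this for regular polygons with shapely; the box size, polygon size, and stopping rule are assumptions for illustration (it also ignores periodic boundaries, so the coverage is only approximate).

```python
import numpy as np
from shapely.geometry import Polygon
from shapely.affinity import rotate, translate

def regular_polygon(n_vertices, radius=1.0):
    """Regular polygon centered at the origin with the given circumscribed radius."""
    angles = 2 * np.pi * np.arange(n_vertices) / n_vertices
    return Polygon(np.column_stack([radius * np.cos(angles), radius * np.sin(angles)]))

def rsa_pack(template, box=50.0, max_failures=2000, seed=0):
    """Random sequential adsorption of unoriented copies of `template` in a square box."""
    rng = np.random.default_rng(seed)
    placed = []
    failures = 0
    while failures < max_failures:
        candidate = translate(rotate(template, rng.uniform(0, 360)),
                              rng.uniform(0, box), rng.uniform(0, box))
        if any(candidate.intersects(p) for p in placed):
            failures += 1                      # rejected: overlaps an earlier shape
        else:
            placed.append(candidate)
            failures = 0
    coverage = sum(p.area for p in placed) / box**2
    return placed, coverage

placed, ratio = rsa_pack(regular_polygon(5, radius=1.0))
print(f"approximate packing ratio: {ratio:.3f}")
```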

  1. Dimensional regularization of local singularities in the fourth post-Newtonian two-point-mass Hamiltonian

    NASA Astrophysics Data System (ADS)

    Jaranowski, Piotr; Schäfer, Gerhard

    2013-04-01

    The article delivers the only still unknown coefficient in the 4th post-Newtonian energy expression for binary point masses on circular orbits as a function of orbital angular frequency. Apart from a single coefficient, which is known solely numerically, all the coefficients are given as exact numbers. The shown Hamiltonian is presented in the center-of-mass frame and, out of its 57 coefficients, 51 are given fully explicitly. This is six coefficients more than previously achieved [P. Jaranowski and G. Schäfer, Phys. Rev. D 86, 061503(R) (2012)]. The local divergences in the point-mass model are uniquely controlled by the method of dimensional regularization. As an application, the last stable circular orbit is determined as a function of the symmetric-mass-ratio parameter.

  2. Application of double-dimensional regularization in a nonabelian gauge theory

    SciTech Connect

    Karnaukhov, S.N.

    1986-04-01

    Calculations of the polarization operator and vertex function in a nonabelian gauge theory are performed in second order of perturbation theory on the basis of the method of I. V. Tyutin (JETP Lett. 35, 428 (1982)). In this calculation the formal contribution of the ghosts disappears, but the expressions for the polarization operator and vertex function are modified in such a way that this leads to automatic allowance for the contribution of the ghosts. For the gauge-invariant β-function the answer coincides with the known expression, but for the polarization operator and vertex function the dependence on the gauge parameter differs from that in standard calculations. It is shown that the calculations can be performed in the framework of dimensional regularization with a special choice of gauge condition.

  3. Computational methodology to determine fluid related parameters of non regular three-dimensional scaffolds.

    PubMed

    Acosta Santamaría, Víctor Andrés; Malvè, M; Duizabo, A; Mena Tobar, A; Gallego Ferrer, G; García Aznar, J M; Doblaré, M; Ochoa, I

    2013-11-01

    The application of three-dimensional (3D) biomaterials to facilitate the adhesion, proliferation, and differentiation of cells has been widely studied for tissue engineering purposes. The fabrication methods used to improve the mechanical response of the scaffold produce complex and non regular structures. Apart from the mechanical aspect, the fluid behavior in the inner part of the scaffold should also be considered. Parameters such as permeability (k) or wall shear stress (WSS) are important aspects in the provision of nutrients, the removal of metabolic waste products, or the mechanically-induced differentiation of cells attached to the trabecular network of the scaffolds. Experimental measurements of these parameters are not available in all labs. However, fluid parameters should be known prior to other types of experiments. The present work compares an experimental study with a computational fluid dynamics (CFD) methodology to determine the related fluid parameters (k and WSS) of complex non regular poly(L-lactic acid) scaffolds based only on the treatment of microphotographic images obtained with a microCT (μCT). The CFD analysis shows similar tendencies and results, with low relative difference compared to those of the experimental study, for high flow rates. For low flow rates the accuracy of this prediction is reduced. The correlation between the computational and experimental results validates the robustness of the proposed methodology. PMID:23807712
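
    Once the pressure drop and flow rate through the scaffold are known (from the experiment or from the CFD run), Darcy's law gives the permeability directly. The helper below simply encodes k = Q·μ·L / (A·ΔP); the numbers in the usage line are illustrative assumptions, not values from the paper.

```python
def darcy_permeability(flow_rate_m3s, viscosity_pa_s, length_m, area_m2, delta_p_pa):
    """Permeability k from Darcy's law: k = Q * mu * L / (A * dP)."""
    return flow_rate_m3s * viscosity_pa_s * length_m / (area_m2 * delta_p_pa)

# Illustrative numbers (assumed): 1 mL/min of water through a 5 mm thick, 1 cm^2 scaffold at 50 Pa.
k = darcy_permeability(flow_rate_m3s=1e-6 / 60, viscosity_pa_s=1e-3,
                       length_m=5e-3, area_m2=1e-4, delta_p_pa=50.0)
print(f"k = {k:.3e} m^2")
```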

  4. Polyakov loop and correlator of Polyakov loops at next-to-next-to-leading order

    SciTech Connect

    Brambilla, Nora; Vairo, Antonio; Ghiglieri, Jacopo; Petreczky, Peter

    2010-10-01

    We study the Polyakov loop and the correlator of two Polyakov loops at finite temperature in the weak-coupling regime. We calculate the Polyakov loop at order g⁴. The calculation of the correlator of two Polyakov loops is performed at distances shorter than the inverse of the temperature and for electric screening masses larger than the Coulomb potential. In this regime, it is accurate up to order g⁶. We also evaluate the Polyakov-loop correlator in an effective field theory framework that takes advantage of the hierarchy of energy scales in the problem and makes explicit the bound-state dynamics. In the effective field theory framework, we show that the Polyakov-loop correlator is at leading order in the multipole expansion the sum of a color-singlet and a color-octet quark-antiquark correlator, which are gauge invariant, and compute the corresponding color-singlet and color-octet free energies.

  5. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High throughput genetic sequencing arrays with thousands of measurements per sample, together with a large amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. The large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  6. Renormalization of the Polyakov loop with gradient flow

    NASA Astrophysics Data System (ADS)

    Petreczky, P.; Schadler, H.-P.

    2015-11-01

    We use the gradient flow for the renormalization of the Polyakov loop in various representations. Using 2+1 flavor QCD with highly improved staggered quarks and lattices with temporal extents of Nτ = 6, 8, 10 and 12 we calculate the renormalized Polyakov loop in many representations including fundamental, sextet, adjoint, decuplet, 15-plet, 24-plet and 27-plet. This approach allows for the calculations of the renormalized Polyakov loops over a large temperature range from T = 116 MeV up to T = 815 MeV, with small errors not only for the Polyakov loop in fundamental representation, but also for the Polyakov loops in higher representations. We compare our results with standard renormalization schemes and discuss the Casimir scaling of the Polyakov loops.

  7. Renormalization group treatment of polymer excluded volume by t'Hooft-Veltman-type dimensional regularization

    NASA Astrophysics Data System (ADS)

    Kholodenko, A. L.; Freed, Karl F.

    1983-06-01

    The chain conformation space renormalization group method is transformed into a representation where the t'Hooft-Veltman method of dimensional regularization can directly be applied to problems involving polymer excluded volume. This t'Hooft-Veltman-type representation enables a comparison to be made with other direct renormalization methods for polymer excluded volume. In contrast to the latter, the current method and the chain conformation one from which it is derived are not restricted to the asymptotic limit of very long chains and do not require the cumbersome use of insertions to calculate the relevant exponents. Furthermore, the theory emerges directly in polymer language from the traditional excluded volume perturbation expansion which provides the correct weight factors for the diagrams. Special attention is paid to the general diagrammatic structure of the theory and to the renormalization prescription in order that this prescription follows from considerations on measurable quantities. The theory is illustrated by calculation of the mean square end-to-end distance and second virial coefficient to second order including the full crossover dependence on the renormalized strength of the excluded volume interaction and on the chain length. A subsequent paper provides the generalization of the theory to the treatment of excluded volume effects in polyelectrolytes.

  8. Globally regular instability of 3-dimensional anti-de Sitter spacetime.

    PubMed

    Bizoń, Piotr; Jałmużna, Joanna

    2013-07-26

    We consider three-dimensional anti-de Sitter (AdS) gravity minimally coupled to a massless scalar field and study numerically the evolution of small smooth circularly symmetric perturbations of the AdS3 spacetime. As in higher dimensions, for a large class of perturbations, we observe a turbulent cascade of energy to high frequencies which entails instability of AdS3. However, in contrast to higher dimensions, the cascade cannot be terminated by black hole formation because small perturbations have energy below the black hole threshold. This situation appears to be challenging for the cosmic censor. Analyzing the energy spectrum of the cascade we determine the width ρ(t) of the analyticity strip of solutions in the complex spatial plane and argue by extrapolation that ρ(t) does not vanish in finite time. This provides evidence that the turbulence is too weak to produce a naked singularity and the solutions remain globally regular in time, in accordance with the cosmic censorship hypothesis. PMID:23931347

  9. Inhomogeneous Polyakov loop induced by inhomogeneous chiral condensates

    NASA Astrophysics Data System (ADS)

    Hayata, Tomoya; Yamamoto, Arata

    2015-05-01

    We study the spatial inhomogeneity of the Polyakov loop induced by inhomogeneous chiral condensates. We formulate an effective model of gluons on the background fields of chiral condensates, and perform its lattice simulation. On the background of inhomogeneous chiral condensates, the Polyakov loop exhibits an in-phase spatial oscillation with the chiral condensates. We also analyze the heavy quark potential and show that the inhomogeneous Polyakov loop indicates the inhomogeneous confinement of heavy quarks.

  10. Accelerated motion corrected three‐dimensional abdominal MRI using total variation regularized SENSE reconstruction

    PubMed Central

    Atkinson, David; Buerger, Christian; Schaeffter, Tobias; Prieto, Claudia

    2015-01-01

    Purpose: Develop a nonrigid motion corrected reconstruction for highly accelerated free-breathing three-dimensional (3D) abdominal images without external sensors or additional scans. Methods: The proposed method accelerates the acquisition by undersampling and performs motion correction directly in the reconstruction using a general matrix description of the acquisition. Data are acquired using a self-gated 3D golden radial phase encoding trajectory, enabling a two stage reconstruction to estimate and then correct motion of the same data. In the first stage total variation regularized iterative SENSE is used to reconstruct highly undersampled respiratory resolved images. A nonrigid registration of these images is performed to estimate the complex motion in the abdomen. In the second stage, the estimated motion fields are incorporated in a general matrix reconstruction, which uses total variation regularization and incorporates k-space data from multiple respiratory positions. The proposed approach was tested on nine healthy volunteers and compared against a standard gated reconstruction using measures of liver sharpness, gradient entropy, visual assessment of image sharpness and overall image quality by two experts. Results: The proposed method achieves similar quality to the gated reconstruction with nonsignificant differences for liver sharpness (1.18 and 1.00, respectively), gradient entropy (1.00 and 1.00), visual score of image sharpness (2.22 and 2.44), and visual rank of image quality (3.33 and 3.39). An average reduction of the acquisition time from 102 s to 39 s could be achieved with the proposed method. Conclusion: In vivo results demonstrate the feasibility of the proposed method, showing similar image quality to the standard gated reconstruction while using data corresponding to a significantly reduced acquisition time.
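
    To convey the flavor of the first-stage reconstruction, the sketch below solves a toy single-coil problem min_x ||M F x − y||² + λ·TV_ε(x) by gradient descent, using an undersampled Cartesian FFT and a smoothed total-variation term. Coil sensitivities, the radial trajectory, and the motion model of the actual method are all omitted; the mask, λ, ε, and step size are assumptions.

```python
import numpy as np

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed (isotropic) total-variation penalty for a 2D image."""
    dx = np.roll(x, -1, axis=0) - x
    dy = np.roll(x, -1, axis=1) - x
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    return -(px - np.roll(px, 1, axis=0)) - (py - np.roll(py, 1, axis=1))

def tv_recon(y, mask, lam=5e-3, step=0.5, n_iter=200):
    """Gradient descent on ||mask * F(x) - y||^2 + lam * TV(x), with F a unitary 2D FFT."""
    x = np.real(np.fft.ifft2(y, norm='ortho'))                 # zero-filled starting point
    for _ in range(n_iter):
        resid = mask * np.fft.fft2(x, norm='ortho') - y        # k-space data-consistency residual
        grad_data = np.real(np.fft.ifft2(mask * resid, norm='ortho'))
        x = x - step * (grad_data + lam * tv_grad(x))
    return x

# Toy usage: undersample a smooth phantom in k-space and reconstruct it.
rng = np.random.default_rng(1)
img = np.outer(np.hanning(64), np.hanning(64))
mask = (rng.random((64, 64)) < 0.4).astype(float)
y = mask * np.fft.fft2(img, norm='ortho')
rec = tv_recon(y, mask)
```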

  11. Two-dimensional encoder with picometre resolution using lattice spacing on regular crystalline surface as standard

    NASA Astrophysics Data System (ADS)

    Aketagawa, Masato; Honda, Hiroshi; Ishige, Masashi; Patamaporn, Chaikool

    2007-02-01

    A two-dimensional (2D) encoder with picometre resolution using multi-tunnelling-probes scanning tunnelling microscope (MTP-STM) as detector units and a regular crystalline lattice as a reference is proposed. In experiments to demonstrate the method, a highly oriented pyrolytic graphite (HOPG) crystal is utilized as the reference. The MTP-STM heads, which are set upon a sample stage, observe multi-points which satisfy some relationship on the HOPG crystalline surface on the sample stage, and the relative 2D displacement between the MTP-STM heads and the sample stage can be determined from the multi-current signals of the multi-points. Two unit lattice vectors on the HOPG crystalline surface with length and intersection angle of 0.246 nm and 60°, respectively, are utilized as 2D displacement references. 2D displacement of the sample stage on which the HOPG crystal is placed can be calculated using the linear sum of the two unit lattice vectors, derived from a linear operation of the multi-current signals. Displacement interpolation less than the lattice spacing of the HOPG crystal can also be performed. To determine the linear sum of the two unit vectors as the 2D displacement, the multi-points to be observed with the MTP-STM must be properly positioned according to the 2D atomic structure of the HOPG crystal. In the experiments, the proposed method is compared with a capacitance sensor whose resolution is improved to approximately 0.1 nm by limiting the sensor's bandwidth to 300 Hz. In order to obtain suitable multi-current signals of the properly positioned multi-points in semi-real-time, lateral dither modulations are applied to the STM probes. The results show that the proposed method has the capability to measure 2D lateral displacements with a resolution on the order of 10 pm with a maximum measurement speed of 100 nm s⁻¹ or more.

  12. Three-dimensional beam pattern of regular sperm whale clicks confirms bent-horn hypothesis

    NASA Astrophysics Data System (ADS)

    Zimmer, Walter M. X.; Tyack, Peter L.; Johnson, Mark P.; Madsen, Peter T.

    2005-03-01

    The three-dimensional beam pattern of a sperm whale (Physeter macrocephalus) tagged in the Ligurian Sea was derived using data on regular clicks from the tag and from hydrophones towed behind a ship circling the tagged whale. The tag defined the orientation of the whale, while sightings and beamformer data were used to locate the whale with respect to the ship. The existence of a narrow, forward-directed P1 beam with source levels exceeding 210 dB_peak re: 1 μPa at 1 m is confirmed. A modeled forward-beam pattern, that matches clicks >20° off-axis, predicts a directivity index of 26.7 dB and source levels of up to 229 dB_peak re: 1 μPa at 1 m. A broader backward-directed beam is produced by the P0 pulse with source levels near 200 dB_peak re: 1 μPa at 1 m and a directivity index of 7.4 dB. A low-frequency component with source levels near 190 dB_peak re: 1 μPa at 1 m is generated at the onset of the P0 pulse by air resonance. The results support the bent-horn model of sound production in sperm whales. While the sperm whale nose appears primarily adapted to produce an intense forward-directed sonar signal, less-directional click components convey information to conspecifics, and give rise to echoes from the seafloor and the surface, which may be useful for orientation during dives.

  13. Effect of the Gribov horizon on the Polyakov loop and vice versa

    NASA Astrophysics Data System (ADS)

    Canfora, F. E.; Dudal, D.; Justo, I. F.; Pais, P.; Rosa, L.; Vercauteren, D.

    2015-07-01

    We consider finite-temperature SU(2) gauge theory in the continuum formulation, which necessitates the choice of a gauge fixing. Choosing the Landau gauge, the existing gauge copies are taken into account by means of the Gribov-Zwanziger quantization scheme, which entails the introduction of a dynamical mass scale (Gribov mass) directly influencing the Green functions of the theory. Here, we determine simultaneously the Polyakov loop (vacuum expectation value) and Gribov mass in terms of temperature, by minimizing the vacuum energy w.r.t. the Polyakov-loop parameter and solving the Gribov gap equation. Inspired by the Casimir energy-style of computation, we illustrate the usage of Zeta function regularization in finite-temperature calculations. Our main result is that the Gribov mass directly feels the deconfinement transition, visible from a cusp occurring at the same temperature where the Polyakov loop becomes nonzero. In this exploratory work we mainly restrict ourselves to the original Gribov-Zwanziger quantization procedure in order to illustrate the approach and the potential direct link between the vacuum structure of the theory (dynamical mass scales) and (de)confinement. We also present a first look at the critical temperature obtained from the refined Gribov-Zwanziger approach. Finally, a particular problem for the pressure at low temperatures is reported.

  14. Remarks on the regularity criteria of three-dimensional magnetohydrodynamics system in terms of two velocity field components

    SciTech Connect

    Yamazaki, Kazuo

    2014-03-15

    We study the three-dimensional magnetohydrodynamics system and obtain its regularity criteria in terms of only two velocity vector field components eliminating the condition on the third component completely. The proof consists of a new decomposition of the four nonlinear terms of the system and estimating a component of the magnetic vector field in terms of the same component of the velocity vector field. This result may be seen as a component reduction result of many previous works [C. He and Z. Xin, “On the regularity of weak solutions to the magnetohydrodynamic equations,” J. Differ. Equ. 213(2), 234–254 (2005); Y. Zhou, “Remarks on regularities for the 3D MHD equations,” Discrete Contin. Dyn. Syst. 12(5), 881–886 (2005)].

  15. Memory-efficient iterative process on a two-dimensional first-order regular graph.

    PubMed

    Park, S C; Jeong, H

    2008-01-01

    We present a parallel and memory-efficient iterative algorithm based on 2D first-order regular graphs. For an M x N regular graph with L iterations, a carefully chosen computation order can reduce the memory resources from O(MN) to O(ML). This scheme can achieve a memory reduction of 4 to 27 times in typical computation-intensive problems such as stereo and motion. PMID:18157263

  16. Polyakov loop fluctuations in the Dirac eigenmode expansion

    NASA Astrophysics Data System (ADS)

    Doi, Takahiro M.; Redlich, Krzysztof; Sasaki, Chihiro; Suganuma, Hideo

    2015-11-01

    We investigate correlations of the Polyakov loop fluctuations with eigenmodes of the lattice Dirac operator. Their analytic relations are derived on the temporally odd-number size lattice with the normal nontwisted periodic boundary condition for the link variables. We find that the low-lying Dirac modes yield negligible contributions to the Polyakov loop fluctuations. This property is confirmed to be valid in confined and deconfined phases by numerical simulations in SU(3) quenched QCD. These results indicate that there is no direct, one-to-one correspondence between confinement and chiral symmetry breaking in QCD in the context of different properties of the Polyakov loop fluctuation ratios.

  17. Vertical profiles of microphysical particle properties derived from inversion with two-dimensional regularization of multiwavelength Raman lidar data: experiment.

    PubMed

    Müller, Detlef; Kolgotin, Alexei; Mattis, Ina; Petzold, Andreas; Stohl, Andreas

    2011-05-10

    Inversion with two-dimensional (2-D) regularization is a new methodology that can be used for the retrieval of profiles of microphysical properties, e.g., effective radius and complex refractive index of atmospheric particles from complete (or sections) of profiles of optical particle properties. The optical profiles are acquired with multiwavelength Raman lidar. Previous simulations with synthetic data have shown advantages in terms of retrieval accuracy compared to our so-called classical one-dimensional (1-D) regularization, which is a method mostly used in the European Aerosol Research Lidar Network (EARLINET). The 1-D regularization suffers from flaws such as retrieval accuracy, speed, and ability for error analysis. In this contribution, we test for the first time the performance of the new 2-D regularization algorithm on the basis of experimental data. We measured with lidar an aged biomass-burning plume over West/Central Europe. For comparison, we use particle in situ data taken in the smoke plume during research aircraft flights upwind of the lidar. We find good agreement for effective radius and volume, surface-area, and number concentrations. The retrieved complex refractive index on average is lower than what we find from the in situ observations. Accordingly, the single-scattering albedo that we obtain from the inversion is higher than what we obtain from the aircraft data. In view of the difficult measurement situation, i.e., the large spatial and temporal distances between aircraft and lidar measurements, this test of our new inversion methodology is satisfactory. PMID:21556108

  18. Polyakov loop and gluon quasiparticles in Yang-Mills thermodynamics

    NASA Astrophysics Data System (ADS)

    Ruggieri, M.; Alba, P.; Castorina, P.; Plumari, S.; Ratti, C.; Greco, V.

    2012-09-01

    We study the interpretation of lattice data about the thermodynamics of the deconfinement phase of SU(3) Yang-Mills theory, in terms of gluon quasiparticles propagating in a background of a Polyakov loop. A potential for the Polyakov loop, inspired by the strong coupling expansion of the QCD action, is introduced; the Polyakov loop is coupled to transverse gluon quasiparticles by means of a gaslike effective potential. This study is useful for identifying the effective degrees of freedom propagating in the gluon medium above the critical temperature. A main general finding is that a dominant part of the phase transition dynamics is accounted for by the Polyakov loop dynamics; hence, the thermodynamics can be described without the need for diverging or exponentially increasing quasiparticle masses as T → Tc, at variance with standard quasiparticle models.

  19. Fast ultrasound beam prediction for linear and regular two-dimensional arrays.

    PubMed

    Hlawitschka, Mario; McGough, Robert J; Ferrara, Katherine W; Kruse, Dustin E

    2011-09-01

    Real-time beam predictions are highly desirable for the patient-specific computations required in ultrasound therapy guidance and treatment planning. To address the longstanding issue of the computational burden associated with calculating the acoustic field in large volumes, we use graphics processing unit (GPU) computing to accelerate the computation of monochromatic pressure fields for therapeutic ultrasound arrays. In our strategy, we start with acceleration of field computations for single rectangular pistons, and then we explore fast calculations for arrays of rectangular pistons. For single-piston calculations, we employ the fast near-field method (FNM) to accurately and efficiently estimate the complex near-field wave patterns for rectangular pistons in homogeneous media. The FNM is compared with the Rayleigh-Sommerfeld method (RSM) for the number of abscissas required in the respective numerical integrations to achieve 1%, 0.1%, and 0.01% accuracy in the field calculations. Next, algorithms are described for accelerated computation of beam patterns for two different ultrasound transducer arrays: regular 1-D linear arrays and regular 2-D linear arrays. For the array types considered, the algorithm is split into two parts: 1) the computation of the field from one piston, and 2) the computation of a piston-array beam pattern based on a pre-computed field from one piston. It is shown that the process of calculating an array beam pattern is equivalent to the convolution of the single-piston field with the complex weights associated with an array of pistons. Our results show that the algorithms for computing monochromatic fields from linear and regularly spaced arrays can benefit greatly from GPU computing hardware, exceeding the performance of an expensive CPU by more than 100 times using an inexpensive GPU board. For a single rectangular piston, the FNM method facilitates volumetric computations with 0.01% accuracy at rates better than 30 ns per field point
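
    The statement that an array beam pattern is the convolution of the single-piston field with the array's complex apodization weights can be checked in a few lines. The 1D sketch below convolves a sampled single-element field with a sparse weight train using scipy; the element field itself is a made-up placeholder, since computing it accurately (e.g. with FNM) is the hard part the paper addresses, and the frequency, pitch, and grid are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

# Lateral grid (m) and a placeholder complex field of ONE element sampled on that grid.
dx = 0.1e-3
x = np.arange(-2048, 2048) * dx
k = 2 * np.pi * 1.5e6 / 1540.0                  # 1.5 MHz in a water-like medium (assumed)
single_piston = np.sinc(x / 5e-3) * np.exp(1j * k * x**2 / (2 * 0.03))  # hypothetical stand-in

# Array description: 64 elements on a regular pitch, uniform apodization (complex weights allowed).
pitch = 0.3e-3
weights = np.zeros_like(x, dtype=complex)
element_idx = np.round((np.arange(64) - 31.5) * pitch / dx).astype(int) + x.size // 2
weights[element_idx] = 1.0                      # steering delays / apodization would go here

# Array beam = single-element field convolved with the weight impulse train.
array_field = fftconvolve(weights, single_piston, mode='same')
print(np.abs(array_field).max())
```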

  20. Connecting Polyakov loops to the thermodynamics of SU(Nc) gauge theories using the gauge-string duality

    NASA Astrophysics Data System (ADS)

    Noronha, Jorge

    2010-02-01

    We show that in four-dimensional gauge theories dual to five-dimensional Einstein gravity coupled to a single scalar field in the bulk, the derivative of the single heavy quark free energy in the deconfined phase is dF_Q(T)/dT ∼ −1/c_s²(T), where c_s(T) is the speed of sound. This general result provides a direct link between the softest point in the equation of state of strongly-coupled plasmas and the deconfinement phase transition described by the expectation value of the Polyakov loop. We give an explicit example of a gravity dual with black hole solutions that can reproduce the lattice results for the expectation value of the Polyakov loop and the thermodynamics of SU(3) Yang-Mills theory in the (nonperturbative) temperature range between Tc and 3Tc.

  1. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    PubMed Central

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-01-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
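
    The Lanczos-based reduction the authors use can be illustrated directly: starting from the initial state, build an orthonormal Krylov basis of the Hamiltonian and evolve inside that small subspace instead of the full Hilbert space. The sketch below does this for spatial search by a continuous-time quantum walk on the complete graph, where the dynamics closes in a two-dimensional subspace; the graph, hopping rate, evolution time, and subspace size are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_basis(H, v0, m):
    """Orthonormal Krylov basis of span{v0, H v0, ...}, stopping early if the space closes."""
    V = np.zeros((len(v0), m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(1, m):
        w = H @ V[:, j-1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # full re-orthogonalization for numerical safety
        nrm = np.linalg.norm(w)
        if nrm < 1e-10:                    # Krylov space closed: invariant subspace found
            return V[:, :j]
        V[:, j] = w / nrm
    return V

# Spatial search by a continuous-time quantum walk on the complete graph (assumed example).
N = 256
A = np.ones((N, N)) - np.eye(N)            # adjacency matrix
w_idx = 7                                  # marked vertex
H = -A / N                                 # hopping term with gamma = 1/N
H[w_idx, w_idx] -= 1.0                     # oracle term -|w><w|
psi0 = np.ones(N) / np.sqrt(N)             # uniform superposition as initial state

V = lanczos_basis(H, psi0, m=6)            # the dynamics is confined to a 2-dimensional subspace
H_red = V.T @ H @ V
t = np.pi * np.sqrt(N) / 2                 # roughly the optimal search time
psi_t = V @ (expm(-1j * t * H_red) @ (V.T @ psi0))

psi_t_full = expm(-1j * t * H) @ psi0      # brute-force check in the full Hilbert space
print(V.shape[1], np.linalg.norm(psi_t - psi_t_full), abs(psi_t[w_idx])**2)
```

    The printout shows the reduced dimension (2), the negligible difference between reduced and full evolution, and a marked-vertex probability close to one, which is the optimal-search behavior discussed in the abstract.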

  2. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    NASA Astrophysics Data System (ADS)

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-09-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network.

  3. Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions

    NASA Astrophysics Data System (ADS)

    Leiderman, Karin; Olson, Sarah D.

    2016-02-01

    The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.

  4. Uniform Regularity and Vanishing Dissipation Limit for the Full Compressible Navier-Stokes System in Three Dimensional Bounded Domain

    NASA Astrophysics Data System (ADS)

    Wang, Yong

    2016-09-01

    In the present paper, we study the uniform regularity and vanishing dissipation limit for the full compressible Navier-Stokes system whose viscosity and heat conductivity are allowed to vanish at different orders. The problem is studied in a three dimensional bounded domain with Navier-slip type boundary conditions. It is shown that there exists a unique strong solution to the full compressible Navier-Stokes system with the boundary conditions in a finite time interval which is independent of the viscosity and heat conductivity. The solution is uniformly bounded in W^{1,∞} and in a conormal Sobolev space. Based on such uniform estimates, we prove the convergence of the solutions of the full compressible Navier-Stokes system to the corresponding solutions of the full compressible Euler system in L^∞(0,T; L²), L^∞(0,T; H¹) and L^∞([0,T] × Ω) with a rate of convergence.

  5. Visualizations of coherent center domains in local Polyakov loops

    SciTech Connect

    Stokes, Finn M.; Kamleh, Waseem; Leinweber, Derek B.

    2014-09-15

    Quantum Chromodynamics exhibits a hadronic confined phase at low to moderate temperatures and, at a critical temperature T_C, undergoes a transition to a deconfined phase known as the quark–gluon plasma. The nature of this deconfinement phase transition is probed through visualizations of the Polyakov loop, a gauge independent order parameter. We produce visualizations that provide novel insights into the structure and evolution of center clusters. Using the HMC algorithm, the percolation during the deconfinement transition is observed. Using 3D rendering of the phase and magnitude of the Polyakov loop, the fractal structure and correlations are examined. The evolution of the center clusters as the gauge fields thermalize from below the critical temperature to above it is also exposed. We observe deconfinement proceeding through a competition for the dominance of a particular center phase. We use stout-link smearing to remove small-scale noise in order to observe the large-scale evolution of the center clusters. A correlation between the magnitude of the Polyakov loop and the proximity of its phase to one of the center phases of SU(3) is evident in the visualizations. - Highlights: • We produce visualizations of center clusters in Polyakov loops. • The evolution of center clusters with HMC simulation time is examined. • Visualizations provide novel insights into the percolation of center clusters. • The magnitude and phase of the Polyakov loop are studied. • A correlation between the magnitude and center phase proximity is evident.

  6. Regularization Method for Predicting an Ordinal Response Using Longitudinal High-dimensional Genomic Data

    PubMed Central

    Hou, Jiayi

    2015-01-01

    An ordinal scale is commonly used to measure health status and disease related outcomes in hospital settings as well as in translational medical research. In addition, repeated measurements are common in clinical practice for tracking and monitoring the progression of complex diseases. Classical methodology based on statistical inference, in particular ordinal modeling, has contributed to the analysis of data in which the response categories are ordered and the number of covariates (p) remains smaller than the sample size (n). With the emergence of genomic technologies being increasingly applied for more accurate diagnosis and prognosis, high-dimensional data, where the number of covariates (p) is much larger than the number of samples (n), are generated. To meet the emerging needs, we introduce a two-stage algorithm: first, we extend the Generalized Monotone Incremental Forward Stagewise (GMIFS) method to the cumulative logit ordinal model; second, we combine the GMIFS procedure with the classical mixed-effects model for classifying disease status in disease progression along with time. We demonstrate the efficiency and accuracy of the proposed models in classification using a time-course microarray dataset collected from the Inflammation and the Host Response to Injury study. PMID:25720102

  7. Simplicial pseudorandom lattice study of a three-dimensional Abelian gauge model, the regular lattice as an extremum of the action

    SciTech Connect

    Pertermann, D.; Ranft, J.

    1986-09-15

    We introduce a simplicial pseudorandom version of lattice gauge theory. In this formulation it is possible to interpolate continuously between a regular simplicial lattice and a pseudorandom lattice. Using this method we study a simple three-dimensional Abelian lattice gauge theory. Calculating average plaquette expectation values, we find an extremum of the action for our regular simplicial lattice. Such a behavior was found in analytical studies in one and two dimensions.

  8. Seiberg-Witten and 'Polyakov-like' Magnetic Bion Confinements are Continuously Connected

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2012-06-01

    We study four-dimensional N = 2 supersymmetric pure-gauge (Seiberg-Witten) theory and its N = 1 mass perturbation by using compactification on S{sup 1} x R{sup 3}. It is well known that on R{sup 4} (or at large S{sup 1} size L) the perturbed theory realizes confinement through monopole or dyon condensation. At small S{sup 1}, we demonstrate that confinement is induced by a generalization of Polyakov's three-dimensional instanton mechanism to a locally four-dimensional theory - the magnetic bion mechanism - which also applies to a large class of nonsupersymmetric theories. Using a large- vs. small-L Poisson duality, we show that the two mechanisms of confinement, previously thought to be distinct, are in fact continuously connected.

  9. RENORMALIZATION OF POLYAKOV LOOPS IN FUNDAMENTAL AND HIGHER REPRESENTATIONS

    SciTech Connect

    KACZMAREK,O.; GUPTA, S.; HUEBNER, K.

    2007-07-30

    We compare two renormalization procedures, one based on the short distance behavior of heavy quark-antiquark free energies and the other based on bare Polyakov loops at different temporal extents of the lattice, and find that both prescriptions are equivalent, resulting in renormalization constants that depend on the bare coupling. Furthermore these renormalization constants show Casimir scaling for higher representations of the Polyakov loops. The analysis of Polyakov loops in different representations of the color SU(3) group indicates that a simple perturbatively inspired relation in terms of the quadratic Casimir operator is realized to a good approximation at temperatures T ≳ T{sub c}, for renormalized as well as bare loops. In contrast to a vanishing Polyakov loop in representations with non-zero triality in the confined phase, the adjoint loops are small but non-zero even for temperatures below the critical one. The adjoint quark-antiquark pairs exhibit screening. This behavior can be related to the binding energy of glue-lump states.
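
    The "simple perturbatively inspired relation" referred to above is Casimir scaling; schematically (notation assumed here rather than quoted from the record),

      L^{\rm ren}_{R}(T) \;\simeq\; \bigl[L^{\rm ren}_{F}(T)\bigr]^{d_R},
      \qquad d_R \;=\; \frac{C_2(R)}{C_2(F)},

    i.e. for T ≳ T_c the renormalized loop in representation R is, to a good approximation, the fundamental-representation loop raised to the ratio of quadratic Casimir operators.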

  10. Polyakov loop, hadron resonance gas model and thermodynamics of QCD

    SciTech Connect

    Megías, E.; Arriola, E. Ruiz; Salcedo, L. L.

    2014-11-11

    We summarize recent results on the hadron resonance gas description of QCD. In particular, we apply this approach to describe the equation of state and the vacuum expectation value of the Polyakov loop in several representations. Ambiguities related to exactly which states should be included are discussed.

  11. The Polyakov relation for the sphere and higher genus surfaces

    NASA Astrophysics Data System (ADS)

    Menotti, Pietro

    2016-05-01

    The Polyakov relation, which in the sphere topology gives the changes of the Liouville action under the variation of the position of the sources, is also related in the case of higher genus to the dependence of the action on the moduli of the surface. We write and prove such a relation for genus 1 and for all hyperelliptic surfaces.

  12. Polyakov loop of antisymmetric representations as a quantum impurity model

    SciTech Connect

    Mueck, Wolfgang

    2011-03-15

    The Polyakov loop of an operator in the antisymmetric representation in N=4 supersymmetric Yang-Mills theory on spatial R{sup 3} is calculated, to leading order in 1/N and at large 't Hooft coupling, by solving the saddle point equations of the corresponding quantum impurity model. Agreement is found with previous results from the supergravity dual, which is given by a D5-brane in an asymptotically AdS{sub 5}xS{sup 5} black brane background. It is shown that the azimuth angle, at which the dual D5-brane wraps the S{sup 5}, is related to the spectral asymmetry angle in the spectral density associated with the Green's function of the impurity fermions. Much of the calculation also applies to the Polyakov loop on spatial S{sup 3} or H{sup 3}.

  13. Polyakov loop at next-to-next-to-leading order

    NASA Astrophysics Data System (ADS)

    Berwein, Matthias; Brambilla, Nora; Petreczky, Péter; Vairo, Antonio

    2016-02-01

    We calculate the next-to-next-to-leading correction to the expectation value of the Polyakov loop or equivalently to the free energy of a static charge. This correction is of order g^5. We show that up to this order the free energy of the static charge is proportional to the quadratic Casimir operator of the corresponding representation. We also compare our perturbative result with the most recent lattice results in SU(3) gauge theory.
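
    The free energy referred to here is related to the Polyakov loop expectation value in the standard way,

      \langle L_R \rangle \;=\; e^{-F_R(T)/T}
      \qquad\Longleftrightarrow\qquad
      F_R(T) \;=\; -\,T\,\ln \langle L_R \rangle ,

    so proportionality of the free energy to the quadratic Casimir C_R up to order g^5 is equivalent to Casimir scaling of ln⟨L_R⟩ at that order.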

  14. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T; Cooper, Benjamin J; Kuncic, Zdenka; Keall, Paul J

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan, and was compared to FDK, ASD-POCS, and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS, and

  15. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    NASA Astrophysics Data System (ADS)

    Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T.; Cooper, Benjamin J.; Kuncic, Zdenka; Keall, Paul J.

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and did
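
    A minimal Python sketch of the central idea described above, a spatially adaptive TV weight driven by a crude threshold segmentation, is given below. It is a toy denoising illustration under stated assumptions (simple intensity-threshold segmentation, periodic boundaries, made-up parameter values), not the authors' ASD-POCS-based implementation operating on projection data.

      # Toy illustration of anatomy-adaptive TV regularization: the TV weight is
      # lowered at pixels a crude threshold segmentation marks as "anatomy", so
      # those structures are smoothed less.  All names and values are assumptions.
      import numpy as np

      def weighted_tv_gradient(u, w, eps=1e-3):
          # Gradient of sum_i w_i * sqrt(|grad u|_i^2 + eps^2), periodic boundaries.
          gx = np.roll(u, -1, axis=0) - u
          gy = np.roll(u, -1, axis=1) - u
          mag = np.sqrt(gx**2 + gy**2 + eps**2)
          px, py = w * gx / mag, w * gy / mag
          return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

      def adaptive_tv_denoise(noisy, n_iter=200, step=0.1,
                              seg_threshold=0.5, w_anatomy=0.05, w_background=0.3):
          # Gradient descent on 0.5*||u - noisy||^2 + weighted TV(u).
          u = noisy.copy()
          for _ in range(n_iter):
              # crude "anatomy segmentation" by intensity threshold; smooth less there
              w = np.where(u > seg_threshold, w_anatomy, w_background)
              grad = (u - noisy) + weighted_tv_gradient(u, w)
              u = u - step * grad
          return u

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          phantom = np.zeros((64, 64))
          phantom[20:44, 20:44] = 1.0                      # toy "anatomy" block
          noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)
          recon = adaptive_tv_denoise(noisy)
          print("RMSE noisy    :", float(np.sqrt(np.mean((noisy - phantom) ** 2))))
          print("RMSE denoised :", float(np.sqrt(np.mean((recon - phantom) ** 2))))

    In the actual method the data-fidelity step acts on the cone-beam projections inside the ASD-POCS loop; the sketch only shows how a per-pixel regularization weight can be updated from a segmentation in every iteration.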

  16. Phase diagram and nucleation in the Polyakov-loop-extended quark-meson truncation of QCD with the unquenched Polyakov-loop potential

    NASA Astrophysics Data System (ADS)

    Stiele, Rainer; Schaffner-Bielich, Jürgen

    2016-05-01

    The unquenching of the Polyakov-loop potential has been shown to be an important improvement for the description of the phase structure and thermodynamics of strongly interacting matter at zero quark chemical potentials with Polyakov-loop-extended chiral models. This work constitutes the first application of the quark backreaction on the Polyakov-loop potential at nonzero density. The observation is that it links the chiral and deconfinement phase transitions also at small temperatures and large quark chemical potentials. The build-up of the surface tension in the Polyakov-loop-extended quark-meson model is explored by investigating the two- and 2 +1 -flavor quark-meson model and analyzing the impact of the Polyakov-loop extension. In general, the order of magnitude of the surface tension is given by the chiral phase transition. The coupling of the chiral and deconfinement transitions with the unquenched Polyakov-loop potential leads to the fact that the Polyakov loop contributes at all temperatures.

  17. Scaling behavior of regularized bosonic strings

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Makeenko, Y.

    2016-03-01

    We implement a proper-time UV regularization of the Nambu-Goto string, introducing an independent metric tensor and the corresponding Lagrange multiplier, and treating them in the mean-field approximation justified for long strings and/or when the dimension of space-time is large. We compute the regularized determinant of the 2D Laplacian for the closed string winding around a compact dimension, obtaining in this way the effective action, whose minimization determines the energy of the string ground state in the mean-field approximation. We discuss the existence of two scaling limits when the cutoff is taken to infinity. One scaling limit reproduces the results obtained by the hypercubic regularization of the Nambu-Goto string as well as by the use of the dynamical triangulation regularization of the Polyakov string. The other scaling limit reproduces the results obtained by canonical quantization of the Nambu-Goto string.

  18. From chiral quark dynamics with Polyakov loop to the hadron resonance gas model

    SciTech Connect

    Arriola, E. R.; Salcedo, L. L.; Megias, E.

    2013-03-25

    Chiral quark models with Polyakov loop at finite temperature have been often used to describe the phase transition. We show how the transition to a hadron resonance gas is realized based on the quantum and local nature of the Polyakov loop.

  19. Fuzzy bags, Polyakov loop and gauge/string duality

    NASA Astrophysics Data System (ADS)

    Zuo, Fen

    2014-11-01

    Confinement in SU(N) gauge theory is due to the linear potential between colored objects. At short distances, the linear contribution could be considered as the quadratic correction to the leading Coulomb term. Recent lattice data show that such quadratic corrections also appear in the deconfined phase, in both the thermal quantities and the Polyakov loop. These contributions are studied systematically employing the gauge/string duality. "Confinement" in N = 4 SU(N) Super Yang-Mills (SYM) theory could be achieved kinematically when the theory is defined on a compact space manifold. In the large-N limit, deconfinement of N = 4 SYM on S^3 at strong coupling is dual to the Hawking-Page phase transition in the global Anti-de Sitter spacetime. Meanwhile, all the thermal quantities and the Polyakov loop acquire significant quadratic contributions. Similar results can also be obtained at weak coupling. However, when confinement is induced dynamically through the local dilaton field in the gravity-dilaton system, these contributions cannot be generated consistently. This is in accordance with the fact that there is no dimension-2 gauge-invariant operator in the boundary gauge theory. Based on these results, we suspect that quadratic corrections, and also confinement, should be due to global or non-local effects in the bulk spacetime.

  20. QCD at zero baryon density and the Polyakov loop paradox

    SciTech Connect

    Kratochvila, Slavo; Forcrand, Philippe de

    2006-06-01

    We compare the grand-canonical partition function at fixed chemical potential {mu} with the canonical partition function at fixed baryon number B, formally and by numerical simulations at {mu}=0 and B=0 with four flavors of staggered quarks. We verify that the free energy densities are equal in the thermodynamic limit, and show that they can be well described by the hadron resonance gas at temperatures below T{sub c}. Small differences between the two ensembles, for thermodynamic observables characterizing the deconfinement phase transition, vanish with increasing lattice size. These differences are solely caused by contributions of nonzero baryon density sectors, which are exponentially suppressed with increasing volume. The Polyakov loop shows a different behavior: for all temperatures and volumes, its expectation value is exactly zero in the canonical formulation, whereas it is always nonzero in the commonly used grand-canonical formulation. We clarify this paradoxical difference, and show that the nonvanishing Polyakov loop expectation value is due to contributions of nonzero triality states, which are not physical, because they give zero contribution to the partition function.
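
    The formal relation between the two ensembles being compared can be summarized as (standard fugacity-expansion notation, assumed)

      Z_{\rm GC}(\mu,T) \;=\; \sum_{B=-\infty}^{\infty} e^{\mu B/T}\, Z_C(B,T),
      \qquad
      Z_C(B,T) \;=\; \frac{1}{2\pi}\int_0^{2\pi}\! d\theta\; e^{-iB\theta}\, Z_{\rm GC}(\mu=i\theta T,\,T),

    so at μ = 0 the grand-canonical ensemble differs from the B = 0 canonical sector only by the nonzero-B contributions, which are exponentially suppressed with the volume; the projection onto fixed B is also what removes the nonzero-triality states responsible for the paradoxical nonvanishing Polyakov loop.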

  1. Numerical corrections to the strong coupling effective Polyakov-line action for finite T Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Bergner, G.; Langelage, J.; Philipsen, O.

    2015-11-01

    We consider a three-dimensional effective theory of Polyakov lines derived previously from lattice Yang-Mills theory and QCD by means of a resummed strong coupling expansion. The effective theory is useful for investigations of the phase structure, with a sign problem mild enough to allow simulations also at finite density. In this work we present a numerical method to determine improved values for the effective couplings directly from correlators of 4d Yang-Mills theory. For values of the gauge coupling up to the vicinity of the phase transition, the dominant short-range effective couplings are well described by their corresponding strong coupling series. We provide numerical results also for the longer range interactions, Polyakov lines in higher representations as well as four-point interactions, and discuss the growing significance of non-local contributions as the lattice gets finer. Within this approach the critical Yang-Mills coupling β_c is reproduced to better than one percent from a one-coupling effective theory on N_τ = 4 lattices, while up to five couplings are needed on N_τ = 8 for the same accuracy.
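
    In its simplest truncation, the effective theory in question is a nearest-neighbour model of traced Polyakov lines; schematically (a sketch only, with couplings λ_i defined by the original derivation rather than specified here),

      S_{\rm eff} \;=\; -\,\lambda_1 \sum_{\langle x,y\rangle} \mathrm{Re}\bigl(L_x L_y^{*}\bigr) \;+\; \cdots,
      \qquad L_x = \mathrm{Tr}\,W_x ,

    where W_x is the untraced Polyakov line at spatial site x; the numerical corrections discussed above improve λ_1 and supply the longer-range, higher-representation and four-point terms hidden in the ellipsis.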

  2. Entropy-based viscous regularization for the multi-dimensional Euler equations in low-Mach and transonic flows

    SciTech Connect

    Marc O Delchini; Jean E. Ragusa; Ray A. Berry

    2015-07-01

    We present a new version of the entropy viscosity method, a viscous regularization technique for hyperbolic conservation laws, that is well-suited for low-Mach flows. By means of a low-Mach asymptotic study, new expressions for the entropy viscosity coefficients are derived. These definitions are valid for a wide range of Mach numbers, from subsonic flows (with very low Mach numbers) to supersonic flows, and no longer depend on an analytical expression for the entropy function. In addition, the entropy viscosity method is extended to Euler equations with variable area for nozzle flow problems. The effectiveness of the method is demonstrated using various 1-D and 2-D benchmark tests: flow in a converging–diverging nozzle; Leblanc shock tube; slow moving shock; strong shock for liquid phase; low-Mach flows around a cylinder and over a circular hump; and supersonic flow in a compression corner. Convergence studies are performed for smooth solutions and solutions with shocks present.
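
    Schematically, the entropy viscosity method adds to each cell K an artificial viscosity built from an entropy residual (generic form shown below as an assumption for orientation; the low-Mach version derived in the paper modifies, in particular, the normalization in the denominator),

      \nu\big|_K \;=\; \min\Bigl( c_{\max}\, h_K\, \bigl(\|u\|_{\infty,K} + c_K\bigr)\;,\;\;
      c_E\, h_K^{2}\, \frac{\|R_s\|_{\infty,K}}{\|s - \bar{s}\|_{\infty}} \Bigr),
      \qquad
      R_s \;=\; \partial_t(\rho s) + \nabla\cdot(\rho u s),

    where s is an entropy function, R_s its residual, h_K the local mesh size, c_K the sound speed, and c_E, c_max tunable constants; since R_s is large only near shocks, the regularization switches itself off in smooth regions.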

  3. Dual quark condensate in the Polyakov-loop extended Nambu-Jona-Lasinio model

    SciTech Connect

    Kashiwa, Kouji; Yahiro, Masanobu; Kouno, Hiroaki

    2009-12-01

    The dual quark condensate {sigma}{sup (n)}, proposed recently as a new order parameter of the spontaneous breaking of the Z{sub 3} symmetry, is evaluated with the Polyakov-loop extended Nambu-Jona-Lasinio (PNJL) model, where n is the winding number. The Polyakov-loop extended Nambu-Jona-Lasinio model reproduces well the lattice QCD data on {sigma}{sup (1)} measured very recently. The dual quark condensate {sigma}{sup (n)} at higher temperatures is sensitive to the strength of the vector-type four-quark interaction in the Polyakov-loop extended Nambu-Jona-Lasinio model and is hence a good quantity for determining that strength.
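
    For reference, the dual quark condensate is usually defined (up to sign and normalization conventions, which are assumed here) as a Fourier transform of the chiral condensate with respect to a twisted temporal boundary condition for the quarks,

      \Sigma^{(n)} \;=\; -\int_0^{2\pi}\frac{d\varphi}{2\pi}\; e^{-i n \varphi}\,
      \langle \bar{q} q \rangle_{\varphi},
      \qquad
      q(\vec{x},\beta) = e^{i\varphi}\, q(\vec{x},0),

    with φ = π corresponding to the physical antiperiodic boundary condition; Σ^{(1)} is the "dressed Polyakov loop", which transforms under Z{sub 3} in the same way as the thin Polyakov loop and therefore probes center-symmetry breaking.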

  4. Regularity of the Rotation Number for the One-Dimensional Time-Continuous Schrödinger Equation

    NASA Astrophysics Data System (ADS)

    Amor, Sana Hadj

    2012-12-01

    Starting from results already obtained for quasi-periodic co-cycles in SL(2, R), we show that the rotation number of the one-dimensional time-continuous Schrödinger equation with Diophantine frequencies and a small analytic potential has the behavior of a 1/2-Hölder function. We also give a sub-exponential estimate of the length of each gap, which depends on its label as given by the gap-labeling theorem.

  5. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    NASA Astrophysics Data System (ADS)

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry.

  6. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry. PMID:24603964

  7. Regular and Chaotic Ray and Wave Mappings for Two and Three-Dimensional Systems with Applications to a Periodically Perturbed Waveguide.

    NASA Astrophysics Data System (ADS)

    Ratowsky, Ricky Paul

    We investigate quantum or wave dynamics for a system which is stochastic in the classical or eikonal (ray) limit. This system is a mapping which couples the standard mapping to an additional degree of freedom. We observe numerically, in most but not all cases, the asymptotic (in time) limitation of diffusion in the classically strongly chaotic regime, and the inhibition of Arnold diffusion when there exist KAM surfaces classically. We present explicitly the two-dimensional asymptotic localized distributions for each case, when they exist. The scaling of the characteristic widths of the localized distributions with coupling strength has been determined. A simple model accounts for the observed behavior in the limit of weak coupling, and we derive a scaling law for the diffusive time scale in the system. We explore some implications of the wave mapping for a class of optical or acoustical systems: a parallel plate waveguide or duct with a periodically perturbed boundary (a grating), and a lens waveguide with nonlinear focusing elements. We compute the ray trajectories of each system, using a Poincare surface of section to study the dynamics. Each system leads to a near-integrable ray Hamiltonian: the phase space splits into regions showing regular or chaotic behavior. The solutions to the scalar Helmholtz equation are found via a secular equation determining the eigenfrequencies. A wave mapping is derived for the system in the paraxial regime. We find that localization should occur, limiting the beam spread in both wavevector and configuration space. In addition, we consider the effect of retaining higher order terms in the paraxial expansion. Although we focus largely on the two dimensional case, we make some remarks concerning the four dimensional mapping for this system.
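
    The base two-dimensional system referred to as "the standard mapping" is the Chirikov standard map; a minimal Python sketch of iterating it is given below (the additional coupled degree of freedom studied here is not included).

      # Chirikov standard map: p' = p + K sin(theta), theta' = theta + p'  (mod 2*pi).
      # For K well above ~0.97 the phase space is predominantly chaotic and the
      # classical momentum diffuses, with <p^2> growing roughly linearly in time.
      import numpy as np

      def standard_map(theta, p, K, n_steps):
          # Iterate the map n_steps times for arrays of initial conditions.
          for _ in range(n_steps):
              p = p + K * np.sin(theta)
              theta = (theta + p) % (2.0 * np.pi)
          return theta, p

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          theta0 = rng.uniform(0.0, 2.0 * np.pi, size=10_000)
          p0 = np.zeros_like(theta0)
          for K in (0.5, 2.0, 5.0):
              _, p = standard_map(theta0.copy(), p0.copy(), K, n_steps=1000)
              print(f"K = {K}: <p^2> after 1000 kicks = {np.mean(p**2):.1f}")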

  8. Nontopological soliton in the Polyakov quark-meson model

    NASA Astrophysics Data System (ADS)

    Jin, Jinshuang; Mao, Hong

    2016-01-01

    Within a mean-field approximation, we study a nontopological soliton solution of the Polyakov quark-meson model in the presence of a fermionic vacuum term with two flavors at finite temperature and density. The profile of the effective potential exhibits a stable soliton solution below a critical temperature T ≤ T_χ^c for both the crossover and the first-order phase transitions, and these solutions are calculated here with appropriate boundary conditions. However, it is found that only for T ≤ T_d^c is the energy of the soliton M_N less than the energy of the three free constituent quarks 3M_q. As T > T_d^c, there is an instant delocalization phase transition from hadron matter to quark matter. The phase diagram together with the location of a critical end point has been obtained in the T and μ plane. We notice that the two critical temperatures always satisfy T_d^c ≤ T_χ^c. Finally, we present and compare the result for the thermodynamic pressure at zero chemical potential with lattice data.

  9. Polyakov loop and the hadron resonance gas model.

    PubMed

    Megías, E; Arriola, E Ruiz; Salcedo, L L

    2012-10-12

    The Polyakov loop has been used repeatedly as an order parameter in the deconfinement phase transition in QCD. We argue that, in the confined phase, its expectation value can be represented in terms of hadronic states, similarly to the hadron resonance gas model for the pressure. Specifically, $L(T) \approx \tfrac{1}{2}\sum_{\alpha} g_{\alpha}\, e^{-\Delta_{\alpha}/T}$, where g_α are the degeneracies and Δ_α are the masses of hadrons with exactly one heavy quark (the mass of the heavy quark itself being subtracted). We show that this approximate sum rule gives a fair description of available lattice data with N_f=2+1 for temperatures in a range starting at 150 MeV.

  10. Propagator, sewing rules, and vacuum amplitude for the Polyakov point particles with ghosts

    SciTech Connect

    Giannakis, I.; Ordonez, C.R.; Rubin, M.A.; Zucchini, R.

    1989-01-01

    The authors apply techniques developed for strings to the case of the spinless point particle. The Polyakov path integral with ghosts is used to obtain the propagator and one-loop vacuum amplitude. The propagator is shown to correspond to the Green's function for the BRST field theory in Siegel gauge. The reparametrization invariance of the Polyakov path integral is shown to lead automatically to the correct trace log result for the one-loop diagram, despite the fact that naive sewing of the ends of a propagator would give an incorrect answer. This type of failure of naive sewing is identical to that found in the string case. The present treatment provides, in the simplified context of the point particle, a pedagogical introduction to Polyakov path integral methods with and without ghosts.

  11. The Polyakov loop correlator at NNLO and singlet and octet correlators

    SciTech Connect

    Ghiglieri, Jacopo

    2011-05-23

    We present the complete next-to-next-to-leading-order calculation of the correlation function of two Polyakov loops for temperatures smaller than the inverse distance between the loops and larger than the Coulomb potential. We discuss the relationship of this correlator with the singlet and octet potentials which we obtain in an Effective Field Theory framework based on finite-temperature potential Non-Relativistic QCD, showing that the Polyakov loop correlator can be re-expressed, at the leading order in a multipole expansion, as a sum of singlet and octet contributions. We also revisit the calculation of the expectation value of the Polyakov loop at next-to-next-to-leading order.
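
    At leading order in the multipole expansion, the decomposition referred to above takes the familiar singlet-octet form (schematic notation, assumed),

      e^{-F_{Q\bar{Q}}(r,T)/T} \;\equiv\; \langle L^{\dagger}(0)\, L(\vec{r})\rangle
      \;=\; \frac{1}{N^2}\, e^{-F_s(r,T)/T} \;+\; \frac{N^2-1}{N^2}\, e^{-F_o(r,T)/T},

    with N = 3 colors; the NNLO analysis gives these singlet and octet free energies a precise definition within finite-temperature potential non-relativistic QCD.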

  12. The effect of the Polyakov loop on the chiral phase transition

    NASA Astrophysics Data System (ADS)

    Markó, G.; Szép, Zs.

    2011-04-01

    The Polyakov loop is included in the SU(2)_L × SU(2)_R chiral quark-meson model by considering the propagation of the constituent quarks, coupled to the (σ, π) meson multiplet, on the homogeneous background of a temporal gauge field, diagonal in color space. The model is solved at finite temperature and quark baryon chemical potential both in the chiral limit and for the physical value of the pion mass by using an expansion in the number of flavors Nf. Keeping the fermion propagator at tree level, a resummation of the pion propagator is constructed which resums infinitely many orders in 1/Nf, where O(1/Nf) represents the order at which the fermions start to contribute to the pion propagator. The influence of the Polyakov loop on the tricritical or the critical point in the μ_q-T phase diagram is studied for various forms of the Polyakov loop potential.

  13. Constituent Quarks and Gluons, Polyakov loop and the Hadron Resonance Gas Model

    NASA Astrophysics Data System (ADS)

    Megías, E.; Ruiz Arriola, E.; Salcedo, L. L.

    2014-03-01

    Based on first principle QCD arguments, it has been argued in [1] that the vacuum expectation value of the Polyakov loop can be represented in the hadron resonance gas model. We study this within the Polyakov-constituent quark model by implementing the quantum and local nature of the Polyakov loop [2, 3]. The existence of exotic states in the spectrum is discussed. Presented by E. Megías at the International Nuclear Physics Conference INPC 2013, 2-7 June 2013, Firenze, Italy. Supported by Plan Nacional de Altas Energías (FPA2011-25948), DGI (FIS2011-24149), Junta de Andalucía grant FQM-225, Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), Spanish MINECO's Centro de Excelencia Severo Ochoa Program grant SEV-2012-0234, and the Juan de la Cierva Program.

  14. Polyakov-loop suppression of colored states in a quark-meson-diquark plasma

    NASA Astrophysics Data System (ADS)

    Blaschke, D.; Dubinin, A.; Buballa, M.

    2015-06-01

    A quark-meson-diquark plasma is considered within the Polyakov-loop extended Nambu-Jona-Lasinio model for dynamical chiral symmetry breaking and restoration in quark matter. Based on a generalized Beth-Uhlenbeck approach to mesons and diquarks we present the thermodynamics of this system including the Mott dissociation of mesons and diquarks at finite temperature. A striking result is the suppression of the diquark abundance below the chiral restoration temperature by the coupling to the Polyakov loop, because of their color degree of freedom. This is understood in close analogy to the suppression of quark distributions by the same mechanism. Mesons as color singlets are unaffected by the Polyakov-loop suppression. At temperatures above the chiral restoration mesons and diquarks are both suppressed due to the Mott effect, whereby the positive resonance contribution to the pressure is largely compensated by the negative scattering contribution in accordance with the Levinson theorem.

  15. Exploring the role of model parameters and regularization procedures in the thermodynamics of the PNJL model

    SciTech Connect

    Ruivo, M. C.; Costa, P.; Sousa, C. A. de; Hansen, H.

    2010-08-05

    The equation of state and the critical behavior around the critical end point are studied in the framework of the Polyakov-Nambu-Jona-Lasinio model. We prove that a convenient choice of the model parameters is crucial to get the correct description of isentropic trajectories. The physical relevance of the effects of the regularization procedure is insured by the agreement with general thermodynamic requirements. The results are compared with simple thermodynamic expectations and lattice data.

  16. Extensions and further applications of the nonlocal Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Hell, T.; Weise, W.; Kashiwa, K.

    2011-06-01

    The nonlocal Polyakov-loop-extended Nambu-Jona-Lasinio model is further improved by including momentum-dependent wave-function renormalization in the quark quasiparticle propagator. Both two- and three-flavor versions of this improved Polyakov-loop-extended Nambu-Jona-Lasinio model are discussed, the latter with inclusion of the (nonlocal) 't Hooft-Kobayashi-Maskawa determinant interaction in order to account for the axial U(1) anomaly. Thermodynamics and phases are investigated and compared with recent lattice-QCD results.

  17. Polyakov loop extended Nambu-Jona-Lasinio model with imaginary chemical potential

    SciTech Connect

    Sakai, Yuji; Kashiwa, Kouji; Yahiro, Masanobu; Kouno, Hiroaki

    2008-03-01

    The Polyakov loop extended Nambu-Jona-Lasinio (PNJL) model with imaginary chemical potential is studied. The model possesses the extended Z{sub 3} symmetry that QCD does. Quantities invariant under the extended Z{sub 3} symmetry, such as the partition function, the chiral condensate, and the modified Polyakov loop, have Roberge-Weiss periodicity. The phase diagram of confinement/deconfinement transition derived with the PNJL model is consistent with the Roberge-Weiss prediction on it and the results of lattice QCD. The phase diagram of chiral transition is also presented by the PNJL model.
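
    The Roberge-Weiss periodicity invoked here states that, as a function of the dimensionless imaginary chemical potential θ = μ_I/T, the partition function repeats with period 2π/3 (standard form, written here as an aside rather than quoted from the record),

      Z(\theta) \;=\; Z\Bigl(\theta + \tfrac{2\pi k}{3}\Bigr), \qquad k \in \mathbb{Z},
      \qquad \theta \equiv \frac{\mu_I}{T},

    because a Z{sub 3} center transformation of the gauge field can be compensated by a shift of θ; the modified Polyakov loop mentioned above is constructed so as to be invariant under this combined (extended Z{sub 3}) transformation.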

  18. Polyakov loop extended Nambu Jona-Lasinio model with imaginary chemical potential

    NASA Astrophysics Data System (ADS)

    Sakai, Yuji; Kashiwa, Kouji; Kouno, Hiroaki; Yahiro, Masanobu

    2008-03-01

    The Polyakov loop extended Nambu Jona-Lasinio (PNJL) model with imaginary chemical potential is studied. The model possesses the extended Z3 symmetry that QCD does. Quantities invariant under the extended Z3 symmetry, such as the partition function, the chiral condensate, and the modified Polyakov loop, have Roberge-Weiss periodicity. The phase diagram of confinement/deconfinement transition derived with the PNJL model is consistent with the Roberge-Weiss prediction on it and the results of lattice QCD. The phase diagram of chiral transition is also presented by the PNJL model.

  19. Phase transition of strongly interacting matter with a chemical potential dependent Polyakov loop potential

    NASA Astrophysics Data System (ADS)

    Shao, Guo-yun; Tang, Zhan-duo; Di Toro, Massimo; Colonna, Maria; Gao, Xue-yan; Gao, Ning

    2016-07-01

    We construct a hadron-quark two-phase model based on the Walecka-quantum hadrodynamics and the improved Polyakov-Nambu-Jona-Lasinio (PNJL) model with an explicit chemical potential dependence of the Polyakov loop potential (μPNJL model). With respect to the original PNJL model, the confined-deconfined phase transition is largely affected at low temperature and large chemical potential. Using the two-phase model, we investigate the equilibrium transition between hadronic and quark matter at finite chemical potentials and temperatures. The numerical results show that the transition boundaries from nuclear to quark matter move towards smaller chemical potential (lower density) when the μ-dependent Polyakov loop potential is used. In particular, for charge asymmetric matter, we compute the local asymmetry of u, d quarks in the hadron-quark coexisting phase, and analyze the isospin-relevant observables possibly measurable in heavy-ion collision (HIC) experiments. In general, new HIC data on the location and properties of the mixed phase would bring relevant information on the expected chemical potential dependence of the Polyakov loop contribution.

  20. Rotations of the Regular Polyhedra

    ERIC Educational Resources Information Center

    Jones, MaryClara; Soto-Johnson, Hortensia

    2006-01-01

    The study of the rotational symmetries of the regular polyhedra is important in the classroom for many reasons. Besides giving the students an opportunity to visualize in three dimensions, it is also an opportunity to relate two-dimensional and three-dimensional concepts. For example, rotations in R[superscript 2] require a point and an angle of…

  1. Renormalized Polyakov loop in the deconfined phase of SU(N) gauge theory and gauge-string duality.

    PubMed

    Andreev, Oleg

    2009-05-29

    We use gauge-string duality to analytically evaluate the renormalized Polyakov loop in pure Yang-Mills theories. For SU(3), the result is in quite good agreement with lattice simulations for a broad temperature range. PMID:19519096

  2. Geometrical interpretation of the Knizhnik-Polyakov-Zamolodchikov exponents

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Anagnostopoulos, K. N.; Magnea, U.; Thorleifsson, G.

    1996-02-01

    We provide evidence that the KPZ exponents in two-dimensional quantum gravity can be interpreted as scaling exponents of correlation functions which are functions of the invariant geodesic distance between the fields.

  3. Thermodynamics of a three-flavor nonlocal Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Hell, T.; Roessner, S.; Cristoforetti, M.; Weise, W.

    2010-04-01

    The present work generalizes a nonlocal version of the Polyakov-loop-extended Nambu and Jona-Lasinio (PNJL) model to the case of three active quark flavors, with inclusion of the axial U(1) anomaly. Gluon dynamics is incorporated through a gluonic background field, expressed in terms of the Polyakov loop. The thermodynamics of the nonlocal PNJL model accounts for both chiral and deconfinement transitions. Our results obtained in mean-field approximation are compared to lattice QCD results for N{sub f}=2+1 quark flavors. Additional pionic and kaonic contributions to the pressure are calculated in random phase approximation. Finally, this nonlocal three-flavor PNJL model is applied to the finite density region of the QCD phase diagram. It is confirmed that the existence and location of a critical point in this phase diagram depend sensitively on the strength of the axial U(1) breaking interaction.

  4. Hydrodynamics of the Polyakov line in SU(Nc) Yang-Mills

    NASA Astrophysics Data System (ADS)

    Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail

    2016-02-01

    We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite N_c for Yang-Mills theory in even and odd space-time dimensions. The hydrostatic solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the deconfined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of N_c, and are consistent with the string model results at N_c = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(N_c) bubble using a piecewise sound wave is suggested.

  5. Crystalline ground states in Polyakov-loop extended Nambu-Jona-Lasinio models

    NASA Astrophysics Data System (ADS)

    Braun, Jens; Karbstein, Felix; Rechenberger, Stefan; Roscher, Dietrich

    2016-01-01

    Nambu-Jona-Lasinio-type models have been used extensively to study the dynamics of the theory of the strong interaction at finite temperature and quark chemical potential on a phenomenological level. In addition to these studies, which are often performed under the assumption that the ground state of the theory is homogeneous, searches for the existence of crystalline phases associated with inhomogeneous ground states have attracted a lot of interest in recent years. In this work, we study the Polyakov-loop extended Nambu-Jona-Lasinio model using two prominent parametrizations and find that the existence of a crystalline phase is stable against a variation of the parametrization of the underlying Polyakov loop potential.

  6. Average phase factor in the Polyakov-loop extended Nambu-Jona-Lasinio model

    SciTech Connect

    Sakai, Yuji; Sasaki, Takahiro; Yahiro, Masanobu; Kouno, Hiroaki

    2010-11-01

    The average phase factor of the QCD determinant is evaluated at finite quark chemical potential ({mu}{sub q}) with the two-flavor version of the Polyakov-loop extended Nambu-Jona-Lasinio model with the scalar-type eight-quark interaction. For {mu}{sub q} larger than half the pion mass m{sub {pi}} at vacuum, the average phase factor is finite only when the Polyakov loop is larger than {approx}0.5, indicating that lattice QCD is feasible only in the deconfinement phase. A critical end point lies in the region where the average phase factor vanishes. The scalar-type eight-quark interaction shortens the relative distance of the critical end point from the boundary of this region. For {mu}{sub q} smaller than half the pion mass, the Polyakov-loop extended Nambu-Jona-Lasinio model with dynamical mesonic fluctuations can reproduce lattice QCD data below the critical temperature.

  7. Dilepton and photon production in the presence of a nontrivial Polyakov loop

    NASA Astrophysics Data System (ADS)

    Hidaka, Yoshimasa; Lin, Shu; Pisarski, Robert D.; Satow, Daisuke

    2015-10-01

    We calculate the production of dileptons and photons in the presence of a nontrivial Polyakov loop in QCD. This is applicable to the semi-Quark Gluon Plasma (QGP), at temperatures above but near the critical temperature for deconfinement. The Polyakov loop is small in the semi-QGP, and near unity in the perturbative QGP. Working to leading order in the coupling constant of QCD, we find that there is a mild enhancement, ~20%, for dilepton production in the semi-QGP over that in the perturbative QGP. In contrast, we find that photon production is strongly suppressed in the semi-QGP, by about an order of magnitude, relative to the perturbative QGP. In the perturbative QGP photon production contains contributions from 2 → 2 scattering and collinear emission with the Landau-Pomeranchuk-Migdal (LPM) effect. In the semi-QGP we show that the two contributions are modified differently. The rate for 2 → 2 scattering is suppressed by a factor which depends upon the Polyakov loop. In contrast, in an SU(N) gauge theory the collinear rate is suppressed by 1/N, so that the LPM effect vanishes at N = ∞. To leading order in the semi-QGP at large N, we compute the rate from 2 → 2 scattering to the leading logarithmic order and the collinear rate to leading order.

  8. Correlation between conserved charges in Polyakov-Nambu-Jona-Lasinio model with multiquark interactions

    SciTech Connect

    Bhattacharyya, Abhijit; Deb, Paramita; Lahiri, Anirban; Ray, Rajarshi

    2011-01-01

    We present a study of correlations among conserved charges like baryon number, electric charge and strangeness in the framework of 2+1 flavor Polyakov loop extended Nambu-Jona-Lasinio model at vanishing chemical potentials, up to fourth order. Correlations up to second order have been measured in lattice QCD, which compares well with our estimates given the inherent difference in the pion masses in the two systems. Possible physical implications of these correlations and their importance in understanding the matter obtained in heavy-ion collisions are discussed. We also present a comparison of the results with the commonly used unbound effective potential in the quark sector of this model.
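
    The correlations in question are the generalized susceptibilities of the conserved charges, i.e. derivatives of the scaled pressure with respect to the corresponding chemical potentials at vanishing chemical potential (standard definition, assumed),

      \chi^{BQS}_{ijk}(T) \;=\;
      \frac{\partial^{\,i+j+k}\bigl[p(T,\mu_B,\mu_Q,\mu_S)/T^4\bigr]}
           {\partial(\mu_B/T)^{i}\,\partial(\mu_Q/T)^{j}\,\partial(\mu_S/T)^{k}}
      \Bigg|_{\mu_B=\mu_Q=\mu_S=0},

    where coefficients with more than one nonzero index measure the correlations among baryon number, electric charge and strangeness; "up to fourth order" corresponds to i + j + k ≤ 4.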

  9. Transport Code for Regular Triangular Geometry

    Energy Science and Technology Software Center (ESTSC)

    1993-06-09

    DIAMANT2 solves the two-dimensional static multigroup neutron transport equation in planar regular triangular geometry. Both regular and adjoint, inhomogeneous and homogeneous problems subject to vacuum, reflective or input specified boundary flux conditions are solved. Anisotropy is allowed for the scattering source. Volume and surface sources are allowed for inhomogeneous problems.

  10. Dynamics and thermodynamics of a nonlocal Polyakov-Nambu-Jona-Lasinio model with running coupling

    SciTech Connect

    Hell, T.; Roessner, S.; Cristoforetti, M.; Weise, W.

    2009-01-01

    A nonlocal covariant extension of the two-flavor Nambu and Jona-Lasinio model is constructed, with built-in constraints from the running coupling of QCD at high-momentum and instanton physics at low-momentum scales. Chiral low-energy theorems and basic current algebra relations involving pion properties are shown to be reproduced. The momentum-dependent dynamical quark mass derived from this approach is in agreement with results from Dyson-Schwinger equations and lattice QCD. At finite temperature, inclusion of the Polyakov loop and its gauge invariant coupling to quarks reproduces the dynamical entanglement of the chiral and deconfinement crossover transitions as in the (local) Polyakov-loop-extended Nambu and Jona-Lasinio model, but now without the requirement of introducing an artificial momentum cutoff. Steps beyond the mean-field approximation are made including mesonic correlations through quark-antiquark ring summations. Various quantities of interest (pressure, energy density, speed of sound, etc.) are calculated and discussed in comparison with lattice QCD thermodynamics at zero chemical potential. The extension to finite quark chemical potential and the phase diagram in the (T,{mu})-plane are also discussed.

  11. Nonlocal Polyakov-Nambu-Jona-Lasinio model and imaginary chemical potential

    NASA Astrophysics Data System (ADS)

    Kashiwa, Kouji; Hell, Thomas; Weise, Wolfram

    2011-09-01

    With the aim of setting constraints for the modeling of the QCD phase diagram, the phase structure of the two-flavor Polyakov-loop-extended Nambu and Jona-Lasinio (PNJL) model is investigated in the range of imaginary chemical potentials (μI) and compared with available Nf=2 lattice QCD results. The calculations are performed using the advanced nonlocal version of the PNJL model with the inclusion of vector-type quasiparticle interactions between quarks, and with wave-function-renormalization corrections. It is demonstrated that the nonlocal PNJL model reproduces important features of QCD at finite μI, such as the Roberge-Weiss (RW) periodicity and the RW transition. Chiral and deconfinement transition temperatures for Nf=2 turn out to coincide both at zero chemical potential and at finite μI. Detailed studies are performed concerning the RW endpoint and its neighborhood where a first-order transition occurs.

  12. Resonances and bound states of the 't Hooft-Polyakov monopole

    SciTech Connect

    Russell, K. M.; Schroers, B. J.

    2011-03-15

    We present a systematic approach to the linearized Yang-Mills-Higgs equations in the background of a 't Hooft-Polyakov monopole and use it to unify and extend previous studies of their spectral properties. We show that a quaternionic formulation allows for a compact and efficient treatment of the linearized equations in the Bogomol'nyi-Prasad-Sommerfield limit of vanishing Higgs self-coupling and use it to study both scattering and bound states. We focus on the sector of vanishing generalized angular momentum and analyze it numerically, putting zero-energy bound states, Coulomb bound states, and infinitely many Feshbach resonances into a coherent picture. We also consider the linearized Yang-Mills-Higgs equations with nonvanishing Higgs self-coupling and confirm the occurrence of Feshbach resonances in this situation.

  13. Polyakov loop in 2 +1 flavor QCD from low to high temperatures

    NASA Astrophysics Data System (ADS)

    Bazavov, A.; Brambilla, N.; Ding, H.-T.; Petreczky, P.; Schadler, H.-P.; Vairo, A.; Weber, J. H.; Tumqcd Collaboration

    2016-06-01

    We study the free energy of a static quark in QCD with 2+1 flavors in a wide temperature region extending upward from 116 MeV, as well as Polyakov loop susceptibilities computed using gradient flow. We discuss the implications of our findings for the deconfinement and chiral crossover phenomena at physical values of the quark masses. Finally, a comparison of the lattice results at high temperatures with the weak-coupling calculations is presented.

  14. Operator regularization and quantum gravity

    NASA Astrophysics Data System (ADS)

    Mann, R. B.; Tarasov, L.; Mckeon, D. G. C.; Steele, T.

    1989-01-01

    Operator regularization has been shown to be a symmetry preserving means of computing Green functions in gauge symmetric and supersymmetric theories which avoids the explicit occurrence of divergences. In this paper we examine how this technique can be applied to computing quantities in non-renormalizable theories in general and quantum gravity in particular. Specifically, we consider various processes to one- and two-loop order in φ^4 theory in N dimensions for N > 4, for which the theory is non-renormalizable. We then apply operator regularization to determine the one-loop graviton correction to the spinor propagator. The effective action for quantum scalars in a background gravitational field is evaluated in operator regularization using both the weak-field method and the normal coordinate expansion. This latter case yields a new derivation of the Schwinger-de Witt expansion which avoids the use of recursion relations. Finally we consider quantum gravity coupled to scalar fields in n dimensions, evaluating those parts of the effective action that (in other methods) diverge as n → 4. We recover the same divergence structure as is found using dimensional regularization if n ≠ 4, but if n = 4 at the outset no divergence arises at any stage of the calculation. The non-renormalizability of such theories manifests itself in the scale-dependence at one-loop order of terms that do not appear in the original lagrangian. In all cases our regularization procedure does not break any invariances present in the theory and avoids the occurrence of explicit divergences.

  15. Continuum regularization of gauge theory with fermions

    SciTech Connect

    Chan, H.S.

    1987-03-01

    The continuum regularization program is discussed in the case of d-dimensional gauge theory coupled to fermions in an arbitrary representation. Two physically equivalent formulations are given. First, a Grassmann formulation is presented, which is based on the two-noise Langevin equations of Sakita, Ishikawa and Alfaro and Gavela. Second, a non-Grassmann formulation is obtained by regularized integration of the matter fields within the regularized Grassmann system. Explicit perturbation expansions are studied in both formulations, and considerable simplification is found in the integrated non-Grassmann formalism.

  16. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung Fung

    1988-01-01

    Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.
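
    A quick way to see how the partitioning scheme enters is to compare, for a 5-point-stencil mesh computation, the computation-to-communication ratio of square block partitions against one-dimensional strip partitions. The Python toy estimate below uses an assumed cost model (work proportional to owned mesh points, communication proportional to boundary points exchanged per iteration) purely to illustrate the kind of comparison discussed.

      # Toy comparison of computation-to-communication ratios for partitioning an
      # N x N mesh with a 5-point stencil over P processors (assumed cost model).
      import math

      def block_partition_ratio(N, P):
          # Square blocks of size (N/sqrt(P)) x (N/sqrt(P)); four edges communicated.
          side = N / math.sqrt(P)
          work = side * side
          comm = 4 * side
          return work / comm

      def strip_partition_ratio(N, P):
          # Horizontal strips of size N x (N/P); two full rows communicated.
          work = N * (N / P)
          comm = 2 * N
          return work / comm

      if __name__ == "__main__":
          N = 1024
          for P in (4, 16, 64, 256):
              print(f"P={P:4d}  block ratio={block_partition_ratio(N, P):8.1f}"
                    f"  strip ratio={strip_partition_ratio(N, P):8.1f}")

    Under this model the block partition's ratio scales as N/(4*sqrt(P)) while the strip partition's scales as N/(2*P), so blocks win increasingly as the processor count grows; the actual choice also depends on the communication pattern, as the abstract notes.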

  17. The consequences of SU(3) color singletness, Polyakov Loop and Z(3) symmetry on a quark-gluon gas

    NASA Astrophysics Data System (ADS)

    Aminul Islam, Chowdhury; Abir, Raktim; Mustafa, Munshi G.; Ray, Rajarshi; Ghosh, Sanjay K.

    2014-02-01

    Based on quantum statistical mechanics, we show that the SU(3) color singlet ensemble of a quark-gluon gas exhibits a Z(3) symmetry through the normalized character in fundamental representation and also becomes equivalent, within a stationary point approximation, to the ensemble given by Polyakov Loop. In addition, a Polyakov Loop gauge potential is obtained by considering spatial gluons along with the invariant Haar measure at each space point. The probability of the normalized character in SU(3) vis-a-vis a Polyakov Loop is found to be maximum at a particular value, exhibiting a strong color correlation. This clearly indicates a transition from a color correlated to an uncorrelated phase, or vice versa. When quarks are included in the gauge fields, a metastable state appears in the temperature range 145 ⩽ T(MeV) ⩽ 170 due to the explicit Z(3) symmetry breaking in the quark-gluon system. Beyond T ⩾ 170 MeV, the metastable state disappears and stable domains appear. At low temperatures, a dynamical recombination of ionized Z(3) color charges to a color singlet Z(3) confined phase is evident, along with a confining background that originates due to the circulation of two virtual spatial gluons, but with conjugate Z(3) phases in a closed loop. We also discuss other possible consequences of the center domains in the color deconfined phase at high temperatures.

  18. Spinodal instabilities of baryon-rich quark-gluon plasma in the Polyakov-Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Li, Feng; Ko, Che Ming

    2016-03-01

    Using the Polyakov-Nambu-Jona-Lasinio model, we study the spinodal instability of a baryon-rich quark-gluon plasma in the linear response theory. We find that the spinodal unstable region in the temperature and density plane shrinks with increasing wave number of the unstable mode and is also reduced if the effect of the Polyakov loop is not included. In the small wave number or long wavelength limit, the spinodal boundaries in both cases of with and without the Polyakov loop coincide with those determined from the isothermal spinodal instability in the thermodynamic approach. Also, the vector interactions among quarks are found to suppress unstable modes of all wave numbers. Moreover, the growth rate of unstable modes initially increases with the wave number but is reduced when the wave number becomes large. Including the collisional effect from quark scattering via the linearized Boltzmann equation, we further find that it decreases the growth rate of unstable modes of all wave numbers. The relevance of these results to relativistic heavy ion collisions is discussed.

  19. Regular FPGA based on regular fabric

    NASA Astrophysics Data System (ADS)

    Xun, Chen; Jianwen, Zhu; Minxuan, Zhang

    2011-08-01

    In the sub-wavelength regime, design for manufacturability (DFM) becomes increasingly important for field programmable gate arrays (FPGAs). In this paper, an automated tile generation flow targeting micro-regular fabric is reported. Using a publicly accessible, well-documented academic FPGA as a case study, we found that compared to the tile generators previously reported, our generated micro-regular tile incurs less than 10% area overhead, which could be potentially recovered by process window optimization, thanks to its superior printability. In addition, we demonstrate that on 45 nm technology, the generated FPGA tile reduces lithography-induced process variation by 33% and reduces the probability of failure by 21.2%. If a further overhead of 10% area can be recovered by enhanced resolution, we can achieve a variation reduction of 93.8% and reduce the probability of failure by 16.2%.

  20. Regular gravitational lagrangians

    NASA Astrophysics Data System (ADS)

    Dragon, Norbert

    1992-02-01

    The Einstein action with vanishing cosmological constant is, for appropriate field content, the unique local action which is regular at the fixed point of affine coordinate transformations. Imposing this regularity requirement, one also excludes Wess-Zumino counterterms which trade gravitational anomalies for Lorentz anomalies. One has to expect dilatational and SL(D) anomalies. If these anomalies are absent and if the regularity of the quantum vertex functional can be controlled, then Einstein gravity is renormalizable.

  1. Thermodynamics and quark susceptibilities: A Monte Carlo approach to the Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Cristoforetti, M.; Hell, T.; Klein, B.; Weise, W.

    2010-06-01

    The Monte-Carlo method is applied to the Polyakov-loop extended Nambu-Jona-Lasinio model. This leads beyond the saddle-point approximation in a mean-field calculation and introduces fluctuations around the mean fields. We study the impact of fluctuations on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand canonical partition function with respect to the quark chemical potential and present a comparison with extrapolations from lattice QCD. We show that the introduction of fluctuations produces only small changes in the behavior of the order parameters for chiral symmetry restoration and the deconfinement transition. On the other hand, we find that fluctuations are necessary in order to reproduce lattice data for the flavor nondiagonal quark susceptibilities. Of particular importance are pion fields, the contribution of which is strictly zero in the saddle point approximation.
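
    For orientation, the Taylor coefficients referred to above are, in the convention commonly used for comparisons with lattice QCD (a sketch of the standard definitions, not necessarily the authors' exact normalization),

      \frac{p(T,\mu_q)}{T^4} \;=\; \sum_{n} c_n(T)\,\Big(\frac{\mu_q}{T}\Big)^{n},
      \qquad
      c_n(T) \;=\; \frac{1}{n!}\,
      \frac{\partial^n \big(p/T^4\big)}{\partial (\mu_q/T)^n}\bigg|_{\mu_q=0},

    so that c_2 is the diagonal quark number susceptibility probed in such comparisons, and the flavor nondiagonal susceptibilities mentioned above are the analogous mixed derivatives with respect to μ_u and μ_d.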

  2. Topological Symmetry, Spin Liquids and CFT Duals of Polyakov Model with Massless Fermions

    SciTech Connect

    Unsal, Mithat

    2008-04-30

    We prove the absence of a mass gap and confinement in the Polyakov model with massless complex fermions in any representation of the gauge group. A U(1){sub *} topological shift symmetry protects the masslessness of one dual photon. This symmetry emerges in the IR as a consequence of the Callias index theorem and abelian duality. For matter in the fundamental representation, the infrared limits of this class of theories interpolate between weakly and strongly coupled conformal field theory (CFT) depending on the number of flavors, and provide an infinite class of CFTs in d = 3 dimensions. The long distance physics of the model is the same as that of certain stable spin liquids. Altering the topology of the adjoint Higgs field by turning it into a compact scalar does not change the long distance dynamics in perturbation theory; however, non-perturbative effects lead to a mass gap for the gauge fluctuations. This provides conceptual clarity to many subtle issues about compact QED{sub 3} discussed in the context of quantum magnets, spin liquids and phase fluctuation models in cuprate superconductors. These constructions also provide new insights into zero temperature gauge theory dynamics on R{sup 2,1} and R{sup 2,1} x S{sup 1}. The confined versus deconfined long distance dynamics is characterized by a discrete versus continuous topological symmetry.

  3. Meson properties at finite temperature in a three flavor nonlocal chiral quark model with Polyakov loop

    SciTech Connect

    Contrera, G. A.; Dumm, D. Gomez; Scoccola, Norberto N.

    2010-03-01

    We study the finite temperature behavior of light scalar and pseudoscalar meson properties in the context of a three-flavor nonlocal chiral quark model. The model includes mixing with active strangeness degrees of freedom, and takes care of the effect of gauge interactions by coupling the quarks with the Polyakov loop. We analyze the chiral restoration and deconfinement transitions, as well as the temperature dependence of meson masses, mixing angles and decay constants. The critical temperature is found to be T{sub c} {approx_equal} 202 MeV, in better agreement with lattice results than the value recently obtained in the local SU(3) PNJL model. It is seen that above T{sub c} the pseudoscalar meson masses increase, becoming degenerate with the masses of their chiral partners. The temperatures at which this matching occurs depend on the strange quark composition of the corresponding mesons. The topological susceptibility shows a sharp decrease after the chiral transition, signalling the vanishing of the U(1){sub A} anomaly for large temperatures.

  4. An effective thermodynamic potential from the instanton vacuum with the Polyakov loop

    NASA Astrophysics Data System (ADS)

    Nam, Seung-Il

    2012-02-01

    In this talk, we report our recent studies on an effective thermodynamic potential (Ω_eff) at finite temperature (T ≠ 0) and zero quark-chemical potential (μ_R = 0), using the singular-gauge instanton solution and the Matsubara formula for N_c = 3 and N_f = 2 in the chiral limit, i.e. m_q = 0. The momentum-dependent constituent-quark mass is computed as a function of T, together with the Harrington-Shepard caloron solution in the large-N_c limit. In addition, we take into account the imaginary quark-chemical potential μ_I ≡ A_4, identified with the traced Polyakov loop (Φ), as an order parameter for the ℤ(N_c) symmetry, characterizing the confinement (intact) and deconfinement (spontaneously broken) phases. As a consequence, we observe the crossover of the chiral (χ) order parameter σ² and of Φ. It also turns out that the critical temperature for the deconfinement phase transition, T_c^ℤ, is lowered by about (5~10)% in comparison to the case with a constant constituent-quark mass. This behavior can be understood as a considerable effect of the partial chiral restoration and the nontrivial QCD vacuum on Φ. Numerical results show that the crossover transitions occur at (T_c^χ, T_c^ℤ) ≈ (216, 227) MeV.

  5. Vector meson spectral function and dilepton rate in the presence of strong entanglement effect between the chiral and the Polyakov loop dynamics

    NASA Astrophysics Data System (ADS)

    Islam, Chowdhury Aminul; Majumder, Sarbani; Mustafa, Munshi G.

    2015-11-01

    In this work we revisit our earlier study of the vector meson spectral function and its spectral property, in the form of the dilepton rate, in a two-flavor Polyakov loop extended Nambu-Jona-Lasinio (PNJL) model in the presence of a strong entanglement between the chiral and Polyakov loop dynamics. The entanglement considered here is generated through the four-quark scalar-type interaction in which the coupling strength depends on the Polyakov loop and runs with temperature and chemical potential. The entanglement effect is also considered for the four-quark vector-type interaction in the same manner. We observe that the entanglement effect relatively enhances the color degrees of freedom due to the running of both the scalar and vector couplings. This modifies the vector meson spectral function, and thus the spectral property, such as the dilepton production rate at low invariant mass, also gets modified.

  6. Regularized Structural Equation Modeling

    PubMed Central

    Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.

    2016-01-01

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility. PMID:27398019
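
    Schematically, and assuming the penalty is added to the usual maximum-likelihood fit function for a model-implied covariance Σ(θ), sample covariance S, and p observed variables (our sketch of the idea, not the paper's exact notation), the RegSEM objective has the form

      F_{\mathrm{reg}}(\theta) \;=\; \Big[\log\lvert\Sigma(\theta)\rvert + \mathrm{tr}\!\big(S\,\Sigma(\theta)^{-1}\big) - \log\lvert S\rvert - p\Big] \;+\; \lambda\, P(\theta_{\mathrm{pen}}),

    where P is the lasso penalty Σ|θ_j| or the ridge penalty Σ θ_j², applied only to the selected ("penalized") parameters θ_pen.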

  7. Nonlocal Polyakov-Nambu-Jona-Lasinio model with wave function renormalization at finite temperature and chemical potential

    SciTech Connect

    Contrera, G. A.; Orsaria, M.; Scoccola, N. N.

    2010-09-01

    We study the phase diagram of strongly interacting matter in the framework of a nonlocal SU(2) chiral quark model which includes wave function renormalization and coupling to the Polyakov loop. Both nonlocal interactions based on the frequently used exponential form factor, and on fits to the quark mass and renormalization functions obtained in lattice calculations are considered. Special attention is paid to the determination of the critical points, both in the chiral limit and at finite quark mass. In particular, we study the position of the critical end point as well as the value of the associated critical exponents for different model parametrizations.

  8. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
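
    The following is a minimal, hypothetical sketch of the two ingredients highlighted above: stochastic gradients in the secant pair and an explicit regularization that keeps the curvature estimate positive definite. It is a simplified illustration on a toy least-squares problem, not the authors' exact RES update; the names stochastic_grad and res_like_step are ours.

      import numpy as np

      def stochastic_grad(w, X, y, idx):
          # Mini-batch least-squares gradient (illustrative objective only)
          Xb, yb = X[idx], y[idx]
          return Xb.T @ (Xb @ w - yb) / len(idx)

      def res_like_step(w, B, X, y, rng, batch=32, eta=0.05, delta=1e-3):
          n = len(w)
          idx = rng.choice(len(y), size=batch, replace=False)
          g = stochastic_grad(w, X, y, idx)
          d = np.linalg.solve(B + delta * np.eye(n), g)   # regularized quasi-Newton direction
          w_new = w - eta * d
          g_new = stochastic_grad(w_new, X, y, idx)       # same mini-batch for the secant pair
          s = w_new - w
          r = g_new - g - delta * s                       # regularized gradient variation
          if s @ r > 1e-12:                               # curvature safeguard
              B = (B
                   - np.outer(B @ s, B @ s) / (s @ B @ s)
                   + np.outer(r, r) / (s @ r)
                   + delta * np.eye(n))                   # keeps the estimate away from singularity
          return w_new, B

      rng = np.random.default_rng(0)
      X = rng.standard_normal((1000, 10))
      w_true = rng.standard_normal(10)
      y = X @ w_true + 0.01 * rng.standard_normal(1000)
      w, B = np.zeros(10), np.eye(10)
      for _ in range(500):
          w, B = res_like_step(w, B, X, y, rng)

    The delta term plays the role of the explicit eigenvalue bounds discussed in the convergence analysis above.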

  9. Krein regularization of QED

    NASA Astrophysics Data System (ADS)

    Forghan, B.; Takook, M. V.; Zarei, A.

    2012-09-01

    In this paper, the electron self-energy, photon self-energy and vertex functions are explicitly calculated in Krein space quantization including quantum metric fluctuation. The results are automatically regularized or finite. The magnetic anomaly and Lamb shift are also calculated in the one loop approximation in this method. Finally, the obtained results are compared to conventional QED results.

  10. Geometry of spinor regularization

    NASA Technical Reports Server (NTRS)

    Hestenes, D.; Lounesto, P.

    1983-01-01

    The Kustaanheimo theory of spinor regularization is given a new formulation in terms of geometric algebra. The Kustaanheimo-Stiefel matrix and its subsidiary condition are put in a spinor form directly related to the geometry of the orbit in physical space. A physically significant alternative to the KS subsidiary condition is discussed. Derivations are carried out without using coordinates.

  11. Regular transport dynamics produce chaotic travel times.

    PubMed

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system. PMID:25019866

  12. Gauge approach to gravitation and regular Big Bang theory

    NASA Astrophysics Data System (ADS)

    Minkevich, A. V.

    2006-03-01

    A field-theoretical scheme of a regular Big Bang in 4-dimensional physical space-time, built in the framework of the gauge approach to gravitation, is discussed. The regular bouncing character of homogeneous isotropic cosmological models is ensured by a gravitational repulsion effect at extreme conditions, without quantum gravitational corrections. The most general properties of regular inflationary cosmological models are examined. The developed theory is valid if the energy density of gravitating matter is positive and the energy dominance condition is fulfilled.

  13. 2+1 flavor Polyakov Nambu Jona-Lasinio model at finite temperature and nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Fu, Wei-Jie; Zhang, Zhao; Liu, Yu-Xin

    2008-01-01

    We extend the Polyakov-loop improved Nambu Jona-Lasinio model to 2+1 flavor case to study the chiral and deconfinement transitions of strongly interacting matter at finite temperature and nonzero chemical potential. The Polyakov loop, the chiral susceptibility of light quarks (u and d), and the strange quark number susceptibility as functions of temperature at zero chemical potential are determined and compared with the recent results of lattice QCD simulations. We find that there is always an inflection point in the curve of strange quark number susceptibility accompanying the appearance of the deconfinement phase, which is consistent with the result of lattice QCD simulations. Predictions for the case at nonzero chemical potential and finite temperature are made as well. We give the phase diagram in terms of the chemical potential and temperature and find that the critical end point moves down to low temperature and finally disappears with the decrease of the strength of the ’t Hooft flavor-mixing interaction.

  14. Perturbations in a regular bouncing universe

    SciTech Connect

    Battefeld, T.J.; Geshnizjani, G.

    2006-03-15

    We consider a simple toy model of a regular bouncing universe. The bounce is caused by an extra timelike dimension, which leads to a sign flip of the {rho}{sup 2} term in the effective four dimensional Randall Sundrum-like description. We find a wide class of possible bounces: big bang avoiding ones for regular matter content, and big rip avoiding ones for phantom matter. Focusing on radiation as the matter content, we discuss the evolution of scalar, vector and tensor perturbations. We compute a spectral index of n{sub s}=-1 for scalar perturbations and a deep blue index for tensor perturbations after invoking vacuum initial conditions, ruling out such a model as a realistic one. We also find that the spectrum (evaluated at Hubble crossing) is sensitive to the bounce. We conclude that it is challenging, but not impossible, for cyclic/ekpyrotic models to succeed, if one can find a regularized version.

  15. Phase diagram of baryon matter in the SU(2) Nambu – Jona-Lasinio model with a Polyakov loop

    NASA Astrophysics Data System (ADS)

    Kalinovsky, Yu L.; Toneev, V. D.; Friesen, A. V.

    2016-04-01

    The nature of phase transitions in hot and dense nuclear matter is discussed in the framework of the effective SU(2) Nambu – Jona-Lasinio model with a Polyakov loop and two quark flavors, one of a few models describing the properties of both the chiral and the confinement-deconfinement phase transitions. We consider the parameters of the model and examine additional interactions that influence the structure of the phase diagram and the positions of critical points in it. The effect of meson correlations on the thermodynamic properties of the quark-meson system is examined. The evolution of the model with changes in the understanding of the phase diagram structure is discussed.

  16. Dimensional Reduction and Hadronic Processes

    SciTech Connect

    Signer, Adrian; Stoeckinger, Dominik

    2008-11-23

    We consider the application of regularization by dimensional reduction to NLO corrections of hadronic processes. The general collinear singularity structure is discussed, the origin of the regularization-scheme dependence is identified and transition rules to other regularization schemes are derived.

  17. A dynamic phase-field model for structural transformations and twinning: Regularized interfaces with transparent prescription of complex kinetics and nucleation. Part I: Formulation and one-dimensional characterization

    NASA Astrophysics Data System (ADS)

    Agrawal, Vaibhav; Dayal, Kaushik

    2015-12-01

    The motion of microstructural interfaces is important in modeling twinning and structural phase transformations. Continuum models fall into two classes: sharp-interface models, where interfaces are singular surfaces; and regularized-interface models, such as phase-field models, where interfaces are smeared out. The former are challenging for numerical solutions because the interfaces need to be explicitly tracked, but have the advantage that the kinetics of existing interfaces and the nucleation of new interfaces can be transparently and precisely prescribed. In contrast, phase-field models do not require explicit tracking of interfaces, thereby enabling relatively simple numerical calculations, but the specification of kinetics and nucleation is both restrictive and extremely opaque. This prevents straightforward calibration of phase-field models to experiment and/or molecular simulations, and breaks the multiscale hierarchy of passing information from atomic to continuum. Consequently, phase-field models cannot be confidently used in dynamic settings. This shortcoming of existing phase-field models motivates our work. We present the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients are a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation - through the source term of the conservation law - and kinetics - through a distinct interfacial velocity field. A formal limit of the kinetic driving force recovers the classical continuum sharp-interface driving force, providing confidence in both the re-parametrized energy and the evolution statement. We present some 1D calculations characterizing the formulation; in a

  18. Convex nonnegative matrix factorization with manifold regularization.

    PubMed

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Moreover, while the basis and encoding vectors obtained by NMF can represent the original data in a low-dimensional space, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph-regularization term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrices. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. PMID:25523040
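
    Schematically, and assuming GCNMF combines the convex-NMF factorization X ≈ X W Gᵀ with a graph penalty built from the Laplacian L of a neighborhood graph on the data points (our reading of the abstract, not the paper's exact notation), the objective has the form

      \min_{W \ge 0,\; G \ge 0}\; \lVert X - X W G^{\mathsf T} \rVert_F^2 \;+\; \lambda\, \mathrm{tr}\!\big(G^{\mathsf T} L G\big),

    where the trace term is small when points that are neighbors on the graph receive similar encodings, which is how the manifold structure enters.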

  19. D → ∞ saddle-point spectrum analysis of the open bosonic Polyakov string in R{sup D} x SO(N)

    SciTech Connect

    Botelho, L.C.L.

    1987-02-15

    In this paper, we investigate the role of the chiral anomaly in determining the spectrum, in the saddle-point approximation D → ∞, of the recently considered Polyakov formulation of bosonic strings moving in R{sup D} x G with K = 2, where G is the group manifold SO(N). The main result is that, in contrast to the critical dimension, the spectrum is not sensitive to the model's chiral anomaly in the D → ∞ limit.

  20. Some results on the spectra of strongly regular graphs

    NASA Astrophysics Data System (ADS)

    Vieira, Luís António de Almeida; Mano, Vasco Moço

    2016-06-01

    Let G be a strongly regular graph whose adjacency matrix is A. We associate to the strongly regular graph G a real finite-dimensional Euclidean Jordan algebra 𝒱 of rank three, spanned by I and the natural powers of A, endowed with the Jordan product of matrices and with the inner product given by the usual trace of matrices. Finally, by analyzing the binomial Hadamard series of an element of 𝒱, we establish some inequalities on the parameters and on the spectrum of a strongly regular graph, like those established in theorems 3 and 4.

  1. 75 FR 53966 - Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm Credit Administration in...

  2. Regularly timed events amid chaos.

    PubMed

    Blakely, Jonathan N; Cooper, Roy M; Corron, Ned J

    2015-11-01

    We show rigorously that the solutions of a class of chaotic oscillators are characterized by regularly timed events in which the derivative of the solution is instantaneously zero. The perfect regularity of these events is in stark contrast with the well-known unpredictability of chaos. We explore some consequences of these regularly timed events through experiments using chaotic electronic circuits. First, we show that a feedback loop can be implemented to phase lock the regularly timed events to a periodic external signal. In this arrangement the external signal regulates the timing of the chaotic signal but does not strictly lock its phase. That is, phase slips of the chaotic oscillation persist without disturbing timing of the regular events. Second, we couple the regularly timed events of one chaotic oscillator to those of another. A state of synchronization is observed where the oscillators exhibit synchronized regular events while their chaotic amplitudes and phases evolve independently. Finally, we add additional coupling to synchronize the amplitudes, as well, however in the opposite direction illustrating the independence of the amplitudes from the regularly timed events. PMID:26651759

  3. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  4. Natural selection and mechanistic regularity.

    PubMed

    DesAutels, Lane

    2016-06-01

    In this article, I address the question of whether natural selection operates regularly enough to qualify as a mechanism of the sort characterized by Machamer, Darden, and Craver (2000). Contrary to an influential critique by Skipper and Millstein (2005), I argue that natural selection can be seen to be regular enough to qualify as an MDC mechanism just fine, as long as we pay careful attention to some important distinctions regarding mechanistic regularity and abstraction. Specifically, I suggest that when we distinguish between process vs. product regularity, mechanism-internal vs. mechanism-external sources of irregularity, and abstract vs. concrete regularity, we can see that natural selection is only irregular in senses that are unthreatening to its status as an MDC mechanism. PMID:26921876

  5. A dynamic phase-field model for structural transformations and twinning: Regularized interfaces with transparent prescription of complex kinetics and nucleation. Part II: Two-dimensional characterization and boundary kinetics

    NASA Astrophysics Data System (ADS)

    Agrawal, Vaibhav; Dayal, Kaushik

    2015-12-01

    A companion paper presented the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients were a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation through the source term of the conservation law and of kinetics through an interfacial velocity field. This model overcomes an important shortcoming of existing phase-field models, namely that the specification of kinetics and nucleation is both restrictive and extremely opaque. In this paper, we present a number of numerical calculations - in one and two dimensions - that characterize our formulation. These calculations illustrate (i) highly-sensitive rate-dependent nucleation; (ii) independent prescription of the forward and backward nucleation stresses without changing the energy landscape; (iii) stick-slip interface kinetics; (iv) the competition between nucleation and kinetics in determining the final microstructural state; (v) the effect of anisotropic kinetics; and (vi) the effect of non-monotone kinetics. These calculations demonstrate the ability of this formulation to precisely prescribe complex nucleation and kinetics in a simple and transparent manner. We also extend our conservation statement to describe the kinetics of the junction lines between microstructural interfaces and boundaries. This enables us to prescribe an additional kinetic relation for the boundary, and we examine the interplay between the bulk kinetics and the junction kinetics.

  6. Laplacian Regularized Low-Rank Representation and Its Applications.

    PubMed

    Yin, Ming; Gao, Junbin; Lin, Zhouchen

    2016-03-01

    Low-rank representation (LRR) has recently attracted a great deal of attention due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. For a given set of observed data corrupted with sparse errors, LRR aims at learning a lowest-rank representation of all data jointly. LRR has broad applications in pattern recognition, computer vision and signal processing. In the real world, data often reside on low-dimensional manifolds embedded in a high-dimensional ambient space. However, the LRR method does not take into account the non-linear geometric structures within data, thus the locality and similarity information among data may be missing in the learning process. To improve LRR in this regard, we propose a general Laplacian regularized low-rank representation framework for data representation into which a hypergraph Laplacian regularizer can be readily introduced, i.e., a Non-negative Sparse Hyper-Laplacian regularized LRR model (NSHLRR). By taking advantage of the graph regularizer, our proposed method not only can represent the global low-dimensional structures, but also capture the intrinsic non-linear geometric information in data. The extensive experimental results on image clustering, semi-supervised image classification and dimensionality reduction tasks demonstrate the effectiveness of the proposed method. PMID:27046494

  7. NONCONVEX REGULARIZATION FOR SHAPE PRESERVATION

    SciTech Connect

    CHARTRAND, RICK

    2007-01-16

    The authors show that using a nonconvex penalty term to regularize image reconstruction can substantially improve the preservation of object shapes. The commonly-used total-variation regularization, {integral}|{del}u|, penalizes the length of the object edges. They show that {integral}|{del}u|{sup p}, 0 < p < 1, only penalizes edges of dimension at least 2-p, and thus finite-length edges not at all. They give numerical examples showing the resulting improvement in shape preservation.
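
    A toy illustration of the idea, not the report's algorithm: denoise a one-dimensional step signal by minimizing a smoothed version of the fidelity-plus-{integral}|{del}u|{sup p} functional with p < 1. The smoothing parameter eps is an assumption added here to make the nonconvex term differentiable.

      import numpy as np
      from scipy.optimize import minimize

      def energy(u, f, lam=1.0, p=0.5, eps=1e-3):
          du = np.diff(u)                                    # discrete gradient
          fidelity = 0.5 * np.sum((u - f) ** 2)
          penalty = np.sum((du ** 2 + eps ** 2) ** (p / 2))  # smoothed sum |grad u|^p
          return fidelity + lam * penalty

      rng = np.random.default_rng(0)
      clean = np.concatenate([np.zeros(20), np.ones(20)])
      noisy = clean + 0.1 * rng.standard_normal(40)
      u = minimize(energy, noisy, args=(noisy,), method="L-BFGS-B").x
      # With p < 1 the jump is kept sharp; p = 1 (total variation) would still
      # penalize the edge in proportion to its height.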

  8. Geometric continuum regularization of quantum field theory

    SciTech Connect

    Halpern, M.B. . Dept. of Physics)

    1989-11-08

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs.

  9. Regular patterns stabilize auditory streams.

    PubMed

    Bendixen, Alexandra; Denham, Susan L; Gyimesi, Kinga; Winkler, István

    2010-12-01

    The auditory system continuously parses the acoustic environment into auditory objects, usually representing separate sound sources. Sound sources typically show characteristic emission patterns. These regular temporal sound patterns are possible cues for distinguishing sound sources. The present study was designed to test whether regular patterns are used as cues for source distinction and to specify the role that detecting these regularities may play in the process of auditory stream segregation. Participants were presented with tone sequences, and they were asked to continuously indicate whether they perceived the tones in terms of a single coherent sequence of sounds (integrated) or as two concurrent sound streams (segregated). Unknown to the participant, in some stimulus conditions, regular patterns were present in one or both putative streams. In all stimulus conditions, participants' perception switched back and forth between the two sound organizations. Importantly, regular patterns occurring in either one or both streams prolonged the mean duration of two-stream percepts, whereas the duration of one-stream percepts was unaffected. These results suggest that temporal regularities are utilized in auditory scene analysis. It appears that the role of this cue lies in stabilizing streams once they have been formed on the basis of simpler acoustic cues. PMID:21218898

  10. Extended Locus of Regular Nuclei

    SciTech Connect

    Amon, L.; Casten, R. F.

    2007-04-23

    A new family of IBM Hamiltonians, characterized by certain parameter values, was found about 15 years ago by Alhassid and Whelan to display almost regular dynamics, and yet these solutions to the IBM do not belong to any of the known dynamical symmetry limits (vibrational, rotational and {gamma}-unstable). Rather, they comprise an 'Arc of Regularity' cutting through the interior of the symmetry triangle from U(5) to SU(3), where suddenly there is a decrease in chaoticity and a significant increase in regularity. A few years ago, the first set of nuclei lying along this arc was discovered. The purpose of the present work is to search more broadly in the nuclear chart, at all nuclei from Z = 40 - 100, for other examples of such 'regular' nuclei. Using a unique signature for such nuclei involving energy differences of certain excited states, we have identified an additional set of 12 nuclei lying near or along the arc. Some of these nuclei, however, are known to have low-lying intruder states, and therefore care must be taken in judging their structure. The regularity exhibited by nuclei near the arc presumably reflects the validity or partial validity of some new, as yet unknown, quantum number describing these systems and giving the regularity found for them.

  11. Automatic Constraint Detection for 2D Layout Regularization.

    PubMed

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art. PMID:26394426
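
    A toy version of the optimization step only (the constraint detection is the paper's contribution and is not sketched here): once an "align the left edges" constraint between two elements has been detected, regularization solves a small quadratic program that moves the edges as little as possible while enforcing the constraint. The numbers below are made up for illustration.

      import numpy as np
      from scipy.optimize import minimize

      x0 = np.array([10.0, 12.5])                            # detected left edges of two elements
      stay_close = lambda x: np.sum((x - x0) ** 2)           # quadratic objective: change the layout minimally
      align = {"type": "eq", "fun": lambda x: x[0] - x[1]}   # detected alignment constraint
      x_reg = minimize(stay_close, x0, constraints=[align]).x
      # x_reg -> [11.25, 11.25]: both edges snapped to a common position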

  12. Regularization Analysis of SAR Superresolution

    SciTech Connect

    DELAURENTIS,JOHN M.; DICKEY,FRED M.

    2002-04-01

    Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. In a previous report the application of the concept to synthetic aperture radar was investigated as an operator inversion problem. Generally, the operator inversion problem is ill posed. This work treats the problem from the standpoint of regularization. Both the operator inversion approach and the regularization approach show that the ability to superresolve SAR imagery is severely limited by system noise.
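
    A generic one-dimensional illustration of the point being made, not the report's SAR operator: inverting a smoothing operator amplifies noise, and Tikhonov regularization trades resolution for stability.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 50
      i = np.arange(n)
      A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)    # ill-conditioned smoothing operator
      x_true = np.zeros(n)
      x_true[[15, 30]] = 1.0                                        # two point scatterers
      y = A @ x_true + 1e-3 * rng.standard_normal(n)

      x_naive = np.linalg.solve(A, y)                               # "superresolved" but noise-dominated
      lam = 1e-2
      x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)   # stable, with a limited resolution gain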

  13. Features of the regular F2-layer

    NASA Astrophysics Data System (ADS)

    Besprozvannaia, A. S.

    1987-10-01

    Results of the empirical modeling of cyclic and seasonal variations of the daytime regular F2-layer are presented. It is shown that the formation of the seasonal anomaly in years of high solar activity is determined mainly by a summer anomaly. This summer anomaly is connected with an increase in the content of molecular nitrogen in the polar ionosphere during summer months due to additional heating and turbulent mixing in connection with intense dissipation of the three-dimensional current system under high-conductivity conditions. In solar-minimum years the seasonal anomaly is determined mainly by seasonal variations of the composition of the neutral atmosphere in the passage from winter to summer.

  14. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  15. Academic Improvement through Regular Assessment

    ERIC Educational Resources Information Center

    Wolf, Patrick J.

    2007-01-01

    Media reports are rife with claims that students in the United States are overtested and that they and their education are suffering as result. Here I argue the opposite--that students would benefit in numerous ways from more frequent assessment, especially of diagnostic testing. The regular assessment of students serves critical educational and…

  16. Temporal regularity in speech perception: Is regularity beneficial or deleterious?

    PubMed Central

    Geiser, Eveline; Shattuck-Hufnagel, Stefanie

    2012-01-01

    Speech rhythm has been proposed to be of crucial importance for correct speech perception and language learning. This study investigated the influence of speech rhythm in second language processing. German pseudo-sentences were presented to participants in two conditions: ‘naturally regular speech rhythm’ and an ‘emphasized regular rhythm'. Nine expert English speakers with 3.5±1.6 years of German training repeated each sentence after hearing it once over headphones. Responses were transcribed using the International Phonetic Alphabet and analyzed for the number of correct, false and missing consonants as well as for consonant additions. The over-all number of correct reproductions of consonants did not differ between the two experimental conditions. However, speech rhythmicization significantly affected the serial position curve of correctly reproduced syllables. The results of this pilot study are consistent with the view that speech rhythm is important for speech perception. PMID:22701753

  17. Distributional Stress Regularity: A Corpus Study

    ERIC Educational Resources Information Center

    Temperley, David

    2009-01-01

    The regularity of stress patterns in a language depends on "distributional stress regularity", which arises from the pattern of stressed and unstressed syllables, and "durational stress regularity", which arises from the timing of syllables. Here we focus on distributional regularity, which depends on three factors. "Lexical stress patterning"…

  18. Grouping pursuit through a regularization solution surface *

    PubMed Central

    Shen, Xiaotong; Huang, Hsin-Cheng

    2010-01-01

    Extracting grouping structure or identifying homogenous subgroups of predictors in regression is crucial for high-dimensional data analysis. One low-dimensional structure in particular, grouping, when captured in a regression model, enhances predictive performance and facilitates a model's interpretability. Grouping pursuit extracts homogenous subgroups of predictors most responsible for outcomes of a response. This is the case in gene network analysis, where grouping reveals gene functionalities with regard to progression of a disease. To address challenges in grouping pursuit, we introduce a novel homotopy method for computing an entire solution surface through regularization involving a piecewise linear penalty. This nonconvex and overcomplete penalty permits adaptive grouping and nearly unbiased estimation, which is treated with a novel concept of grouped subdifferentials and difference convex programming for efficient computation. Finally, the proposed method not only achieves high performance as suggested by numerical analysis, but also has the desired optimality with regard to grouping pursuit and prediction, as shown by our theoretical results. PMID:20689721

  19. Adaptive regularization of earthquake slip distribution inversion

    NASA Astrophysics Data System (ADS)

    Wang, Chisheng; Ding, Xiaoli; Li, Qingquan; Shan, Xinjian; Zhu, Jiasong; Guo, Bo; Liu, Peng

    2016-04-01

    Regularization is a routine approach used in earthquake slip distribution inversion to avoid numerically abnormal solutions. To date, most slip inversion studies have imposed uniform regularization on all the fault patches. However, adaptive regularization, where each retrieved parameter is regularized differently, has exhibited better performance in other research fields such as image restoration. In this paper, we investigate adaptive regularization for earthquake slip distribution inversion. It is found that adaptive regularization can achieve a significantly smaller mean square error (MSE) than uniform regularization, if it is set properly. We propose an adaptive regularization method based on weighted total least squares (WTLS). This approach assumes that errors exist in both the regularization matrix and the observation, and an iterative algorithm is used to solve for the solution. A weight coefficient is used to balance the regularization matrix residual and the observation residual. An experiment using four slip patterns was carried out to validate the proposed method. The results show that the proposed regularization method can derive a smaller MSE than uniform regularization and resolution-based adaptive regularization, and the improvement in MSE is more significant for slip patterns with low-resolution slip patches. Finally, we apply the proposed regularization method to study the slip distribution of the 2011 Mw 9.0 Tohoku earthquake. The retrieved slip distribution is less smooth and more detailed than the one retrieved with the uniform regularization method, and is closer to the existing slip model from joint inversion of the geodetic and seismic data.
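
    A simplified sketch of uniform versus adaptive damping, not the authors' WTLS algorithm (which additionally treats errors in the regularization matrix itself): each model parameter receives its own regularization weight instead of a single global one, here chosen by a crude reweighting of a first uniform solution.

      import numpy as np

      def damped_inversion(G, d, weights):
          # Solve min ||G m - d||^2 + ||diag(weights) m||^2
          W2 = np.diag(np.asarray(weights) ** 2)
          return np.linalg.solve(G.T @ G + W2, G.T @ d)

      rng = np.random.default_rng(2)
      G = rng.standard_normal((60, 30))              # stand-in for the Green's function matrix
      m_true = np.zeros(30); m_true[10:18] = 1.0     # compact slip patch
      d = G @ m_true + 0.05 * rng.standard_normal(60)

      alpha = 2.0
      m_uniform = damped_inversion(G, d, np.full(30, alpha))
      # adaptive: damp less where the first pass suggests slip, more elsewhere
      m_adaptive = damped_inversion(G, d, alpha / (np.abs(m_uniform) + 0.1))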

  20. On the four-dimensional formulation of dimensionally regulated amplitudes

    NASA Astrophysics Data System (ADS)

    Fazio, A. R.; Mastrolia, P.; Mirabella, E.; Torres Bobadilla, W. J.

    2014-12-01

    Elaborating on the four-dimensional helicity scheme, we propose a pure four-dimensional formulation (FDF) of the d-dimensional regularization of one-loop scattering amplitudes. In our formulation particles propagating inside the loop are represented by massive internal states regulating the divergences. The latter obey Feynman rules containing multiplicative selection rules which automatically account for the effects of the extra-dimensional regulating terms of the amplitude. We present explicit representations of the polarization and helicity states of the four-dimensional particles propagating in the loop. They allow for a complete, four-dimensional, unitarity-based construction of d-dimensional amplitudes. Generalized unitarity within the FDF does not require any higher-dimensional extension of the Clifford and the spinor algebra. Finally we show how the FDF allows for the recursive construction of d-dimensional one-loop integrands, generalizing the four-dimensional open-loop approach.

  1. Regularized image system for Stokes flow outside a solid sphere

    NASA Astrophysics Data System (ADS)

    Wróbel, Jacek K.; Cortez, Ricardo; Varela, Douglas; Fauci, Lisa

    2016-07-01

    The image system for a three-dimensional flow generated by regularized forces outside a solid sphere is formulated and implemented as an extension of the method of regularized Stokeslets. The method is based on replacing a point force given by a delta distribution with a smooth localized function and deriving the exact velocity field produced by the forcing. In order to satisfy zero-flow boundary conditions at a solid sphere, the image system for singular Stokeslets is generalized to give exact cancellation of the regularized flow at the surface of the sphere. The regularized image system contains the same elements as the singular counterpart but with coefficients that depend on a regularization parameter. As this parameter vanishes, the expressions reduce to the image system of the singular Stokeslet. The expression relating force and velocity can be inverted to compute the forces that generate a given velocity boundary condition elsewhere in the flow. We present several examples within the context of biological flows at the microscale in order to validate and highlight the usefulness of the image system in computations.
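
    The free-space building block of the method, written here for one commonly used blob (ψ_ε ∝ ε⁴/(r² + ε²)^(7/2)); the sphere image system derived in the paper adds image elements inside the sphere on top of this and is not reproduced here.

      import numpy as np

      def regularized_stokeslet(x_eval, x0, f, eps, mu=1.0):
          # Velocity at x_eval induced by a regularized point force f located at x0
          dx = np.asarray(x_eval, float) - np.asarray(x0, float)
          r2 = float(dx @ dx)
          f = np.asarray(f, float)
          denom = 8.0 * np.pi * mu * (r2 + eps ** 2) ** 1.5
          return (f * (r2 + 2.0 * eps ** 2) + (f @ dx) * dx) / denom

      # Velocity one unit above a downward point force, with blob width eps = 0.05;
      # as eps -> 0 the expression reduces to the singular Stokeslet, mirroring the
      # limit noted in the abstract.
      u = regularized_stokeslet([0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, -1.0], eps=0.05)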

  2. Knowledge and regularity in planning

    NASA Technical Reports Server (NTRS)

    Allen, John A.; Langley, Pat; Matwin, Stan

    1992-01-01

    The field of planning has focused on several methods of using domain-specific knowledge. The three most common methods, use of search control, use of macro-operators, and analogy, are part of a continuum of techniques differing in the amount of reused plan information. This paper describes TALUS, a planner that exploits this continuum, and is used for comparing the relative utility of these methods. We present results showing how search control, macro-operators, and analogy are affected by domain regularity and the amount of stored knowledge.

  3. Creating Two-Dimensional Nets of Three-Dimensional Shapes Using "Geometer's Sketchpad"

    ERIC Educational Resources Information Center

    Maida, Paula

    2005-01-01

    This article is about a computer lab project in which prospective teachers used Geometer's Sketchpad software to create two-dimensional nets for three-dimensional shapes. Since this software package does not contain ready-made tools for creating non-regular or regular polygons, the students used prior knowledge and geometric facts to create their…

  4. Tessellating the Sphere with Regular Polygons

    ERIC Educational Resources Information Center

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tesellate the sphere are spherical triangles, squares and pentagons.

  5. Some Cosine Relations and the Regular Heptagon

    ERIC Educational Resources Information Center

    Osler, Thomas J.; Heng, Phongthong

    2007-01-01

    The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)

  6. Regular Pentagons and the Fibonacci Sequence.

    ERIC Educational Resources Information Center

    French, Doug

    1989-01-01

    Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)

  7. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  8. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  9. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  10. Natural frequency of regular basins

    NASA Astrophysics Data System (ADS)

    Tjandra, Sugih S.; Pudjaprasetya, S. R.

    2014-03-01

    Similar to the vibration of a guitar string or an elastic membrane, water waves in an enclosed basin undergo standing oscillatory waves, also known as seiches. The resonant (eigen) periods of seiches are determined by the water depth and the geometry of the basin. For regular basins, explicit formulas are available. Resonance occurs when the dominant frequency of the external force matches an eigen frequency of the basin. In this paper, we implement a conservative finite volume scheme for the 2D shallow water equations to simulate resonance in closed basins. Further, we would like to use this scheme, together with the energy spectra of the recorded signal, to extract resonant periods of arbitrary basins. Here we first test the procedure by computing the resonant periods of a closed square basin. The numerical resonant periods that we obtain are comparable with those from analytical formulas.
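
    For reference, the explicit formula alluded to, written for a flat-bottomed rectangular basin in linear shallow-water theory (a Merian-type formula; the uniform depth is our simplifying assumption):

      import math

      def seiche_period(Lx, Ly, h, m, n, g=9.81):
          # Eigenperiod of mode (m, n) of a closed rectangular basin of uniform depth h
          return 2.0 / (math.sqrt(g * h) * math.hypot(m / Lx, n / Ly))

      # Fundamental longitudinal seiche of a 1 km x 1 km basin, 10 m deep: about 202 s
      T10 = seiche_period(1000.0, 1000.0, 10.0, 1, 0)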

  11. Pairing effect and misleading regularity

    NASA Astrophysics Data System (ADS)

    Al-Sayed, A.

    2015-11-01

    We study the nearest neighbor spacing distribution of energy levels of even-even nuclei, classified according to their reduced electric quadrupole transition probability B(E2)↑, using the available experimental data. We compare the Brody and Abul-Magd distributions, which extract the degree of chaoticity of the nuclear dynamics. The results show that the Abul-Magd parameter f can represent the chaotic behavior in a more acceptable way than Brody's, especially if a statistically significant study is desired. A smooth transition from chaos to order is observed as B(E2)↑ increases. An apparent regularity is located in the second interval, namely 0.05 ≤ B(E2) < 0.1 in e²b² units, and 10 ≤ B(E2) < 15 in Weisskopf units. Finally, the chaotic behavior parameterized in terms of B(E2)↑ does not depend on the unit used.
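
    For context, the Brody distribution used in the comparison interpolates between the Poisson (regular) and Wigner (chaotic) limits of the nearest neighbor spacing distribution; a short sketch follows. The Abul-Magd distribution, with its mixing parameter f, has a different analytic form and is not reproduced here.

      import numpy as np
      from scipy.special import gamma

      def brody(s, beta):
          # Nearest-neighbor spacing distribution; beta = 0 -> Poisson, beta = 1 -> Wigner
          b = gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
          return (beta + 1.0) * b * s ** beta * np.exp(-b * s ** (beta + 1.0))

      s = np.linspace(0.0, 3.0, 301)
      p_regular = brody(s, 0.0)   # exp(-s)
      p_chaotic = brody(s, 1.0)   # (pi/2) s exp(-pi s^2 / 4)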

  12. Wave dynamics of regular and chaotic rays

    SciTech Connect

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space.

  13. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    Significantly, many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  14. Energy Scaling Law for the Regular Cone

    NASA Astrophysics Data System (ADS)

    Olbermann, Heiner

    2016-04-01

    We consider a thin elastic sheet in the shape of a disk whose reference metric is that of a singular cone. That is, the reference metric is flat away from the center and has a defect there. We define a geometrically fully nonlinear free elastic energy and investigate the scaling behavior of this energy as the thickness h tends to 0. We work with two simplifying assumptions: Firstly, we think of the deformed sheet as an immersed 2-dimensional Riemannian manifold in Euclidean 3-space and assume that the exponential map at the origin (the center of the sheet) supplies a coordinate chart for the whole manifold. Secondly, the energy functional penalizes the difference between the induced metric and the reference metric in L^∞ (instead of, as is usual, in L^2). Under these assumptions, we show that the elastic energy per unit thickness of the regular cone in the leading order of h is given by C^*h^2|log h|, where the value of C^* is given explicitly.

  15. Existence, uniqueness, and equivalence theorems for magnetic monopoles in general (4p-1)-dimensional Yang-Mills theory

    SciTech Connect

    Gao Zhifeng; Zhang Jing

    2009-04-15

    In this paper, we use the method of calculus of variations to establish the existence of energy-minimizing radially symmetric magnetic monopole solutions in the general (4p-1)-dimensional Yang-Mills gauge field theory developed recently by Radu and Tchrakian. We also show that these solutions are either self-dual or anti-self-dual and, hence, unique. Our study extends the existence work of Belavin, Polyakov, Schwartz, and Tyupin and the equivalence and uniqueness work of Maison in three dimensions and the work of Yang in seven dimensions to the situation of arbitrary (4p-1) dimensions.

  16. Regularity vs genericity in the perception of collinearity.

    PubMed

    Feldman, J

    1996-01-01

    The perception of collinearity is investigated, with the focus on the minimal case of three dots. As suggested previously, from the standpoint of probabilistic inference, the observer must classify each dot triplet as having arisen either from a one-dimensional curvilinear process or from a two-dimensional patch. The normative distributions of triplets arising from these two classes are unavailable to the observer, and are in fact somewhat counterintuitive. Hence, in order to classify triplets, the observer invents distributions for each of the two opposed types, 'regular' (collinear) triplets and 'generic' (i.e., not regular) triplets. The collinear prototype is centered at 0 degrees (i.e., perfectly straight), whereas the generic prototype, contrary to the normative statistics, is centered at 120 degrees away from straight, apparently because this is the point most distant in triplet space from straight and thus creates the maximum possible contrast between the two prototypes. By default, these two processes are assumed to be equiprobable in the environment. An experiment designed to investigate how subjects' judgments are affected by conspicuous environmental deviations from this assumption is reported. The results suggest that observers react by elevating or depressing the expected probability of the generic prototype relative to the regular one, leaving the prototype structure otherwise intact. PMID:8804096

  17. Simulation Of Attenuation Regularity Of Detonation Wave In Pmma

    NASA Astrophysics Data System (ADS)

    Lan, Wei; Xiaomian, Hu

    2012-03-01

    Polymethyl methacrylate (PMMA) is often used as clapboard or protective medium in the parameter measurement of detonation wave propagation. Theoretical and experimental research shows that the pressure of a shock wave in condensed material attenuates exponentially with the propagation distance. Simulation of detonation-produced shock wave propagation in PMMA was conducted using a two-dimensional Lagrangian computational fluid dynamics program, and results were compared with the experimental data. Different charge diameters and different angles between the direction of detonation wave propagation and the normal direction of the confined boundary were considered during the calculation. Results show that the detonation-produced shock wave propagation in PMMA accords with the exponential regularity of shock wave attenuation in condensed material, and several factors are relevant to the attenuation coefficient, such as charge diameter and interface angle.

  18. Simulation of attenuation regularity of detonation wave in PMMA

    NASA Astrophysics Data System (ADS)

    Lan, Wei; Xiaomian, Hu

    2011-06-01

    Polymethyl methacrylate (PMMA) is often used as a clapboard or protective medium in measurements of detonation wave propagation parameters, because its shock impedance is similar to that of the explosive. Theoretical and experimental research shows that the pressure of a shock wave in condensed material attenuates exponentially with the propagation distance. Simulation of detonation wave propagation in PMMA is conducted using a two-dimensional Lagrangian computational fluid dynamics program, and the results are compared with experimental data. Different charge diameters and different angles between the direction of detonation wave propagation and the normal direction of the confined boundary are considered in the calculation. The results show that detonation wave propagation in PMMA follows the exponential attenuation law of shock waves in condensed material, and that several factors, such as charge diameter and interface angle, affect the attenuation coefficient.

  19. Manifestly scale-invariant regularization and quantum effective operators

    NASA Astrophysics Data System (ADS)

    Ghilencea, D. M.

    2016-05-01

    Scale-invariant theories are often used to address the hierarchy problem. However, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function μ(σ) which, after spontaneous breaking of the scale symmetry, generates the usual dimensional regularization subtraction scale μ(⟨σ⟩). One consequence is that "evanescent" interactions generated by scale invariance of the action in d = 4 - 2ε (but vanishing in d = 4) give rise to new, finite quantum corrections. We find a (finite) correction ΔU(φ,σ) to the one-loop scalar potential for φ and σ, beyond the Coleman-Weinberg term. ΔU is due to an evanescent correction (∝ ε) to the field-dependent masses (of the states in the loop) which multiplies the pole (∝ 1/ε) of the momentum integral to give a finite quantum result. ΔU contains a nonpolynomial operator ~ φ^6/σ^2 of known coefficient and is independent of the dimensionless subtraction parameter. A more general μ(φ,σ) is ruled out since, in their classical decoupling limit, the visible sector (of the Higgs φ) and hidden sector (dilaton σ) still interact at the quantum level; thus, the subtraction function must depend on the dilaton only, μ ~ σ. The method is useful in models where preserving scale symmetry at the quantum level is important.

  20. Perfect state transfer over distance-regular spin networks

    SciTech Connect

    Jafarizadeh, M. A.; Sufiani, R.

    2008-02-15

    Christandl et al. have noted that the d-dimensional hypercube can be projected to a linear chain with d+1 sites so that, by considering fixed but different couplings between the qubits assigned to the sites, perfect state transfer (PST) can be achieved over arbitrarily long distances in the chain [Phys. Rev. Lett. 92, 187902 (2004); Phys. Rev. A 71, 032312 (2005)]. In this work we consider distance-regular graphs as spin networks and note that any such network (not just the hypercube) can be projected to a linear chain and so can allow PST over long distances. We consider some particular spin Hamiltonians which are extended versions of those of Christandl et al. Then, by using techniques such as stratification of distance-regular graphs and spectral analysis methods, we give a procedure for finding a set of coupling constants in the Hamiltonians so that a particular state initially encoded on one site will evolve freely to the opposite site without any dynamical control, i.e., we show how to derive the parameters of the system so that PST can be achieved. It is seen that PST is allowed only in distance-regular spin networks for which, starting from an arbitrary vertex as reference vertex (prepared in the initial state which we wish to transfer), the last stratum of the network with respect to the reference state contains only one vertex; i.e., the stratification of these networks plays an important role in determining in which kinds of networks, and between which of their vertices, PST can be achieved. As examples, the cycle network with an even number of vertices and the d-dimensional hypercube are considered in detail, and the method is applied to some important distance-regular networks.

  1. Digital image correlation involves an inverse problem: A regularization scheme based on subset size constraint

    NASA Astrophysics Data System (ADS)

    Zhan, Qin; Yuan, Yuan; Fan, Xiangtao; Huang, Jianyong; Xiong, Chunyang; Yuan, Fan

    2016-06-01

    Digital image correlation (DIC) essentially involves an inverse problem. Here, a regularization scheme is developed for the subset-based DIC technique to effectively inhibit the potential ill-posedness that can arise in actual deformation calculations and hence enhance the numerical stability, accuracy and precision of correlation measurement. With the aid of a parameterized two-dimensional Butterworth window, a regularized subpixel registration strategy is established, in which the amount of speckle information introduced into the correlation calculations may be weighted through an equivalent subset size constraint. The optimal regularization parameter associated with each individual sampling point is determined in a self-adaptive way by numerically investigating the curve of the 2-norm condition number of the coefficient matrix versus the corresponding equivalent subset size, based on which the regularized solution can eventually be obtained. Numerical results deriving from both synthetic speckle images and actual experimental images demonstrate the feasibility and effectiveness of the newly proposed regularized DIC algorithms.
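
    The following sketch illustrates the general idea, not the authors' implementation: a parameterized 2D Butterworth window weights a translation-only Gauss-Newton matrix assembled from the gradients of a synthetic speckle subset, and the 2-norm condition number is tracked as the cutoff radius (playing the role of an equivalent subset size) varies. All function names and parameter values are illustrative assumptions.

    import numpy as np

    def butterworth_window(size, cutoff, order=4):
        # Parameterized 2D Butterworth window centred on a (size x size) subset.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        r = np.hypot(x, y)
        return 1.0 / (1.0 + (r / cutoff) ** (2 * order))

    def weighted_condition_number(gx, gy, window):
        # 2-norm condition number of the window-weighted normal (Gauss-Newton)
        # matrix for a translation-only shape function.
        w = window.ravel()
        J = np.column_stack([gx.ravel(), gy.ravel()])
        H = (J * w[:, None]).T @ J
        return np.linalg.cond(H, 2)

    rng = np.random.default_rng(0)
    speckle = rng.random((41, 41))          # synthetic speckle subset
    gy, gx = np.gradient(speckle)

    # Sweep the cutoff radius (an "equivalent subset size") and inspect conditioning.
    for cutoff in (3, 6, 10, 15, 20):
        w = butterworth_window(41, cutoff)
        print(cutoff, weighted_condition_number(gx, gy, w))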

  2. Efficient determination of multiple regularization parameters in a generalized L-curve framework

    NASA Astrophysics Data System (ADS)

    Belge, Murat; Kilmer, Misha E.; Miller, Eric L.

    2002-08-01

    The selection of multiple regularization parameters is considered in a generalized L-curve framework. Multidimensional extensions of the L-curve for selecting multiple regularization parameters are introduced, and a minimum distance function (MDF) is developed for approximating the regularization parameters corresponding to the generalized corner of the L-hypersurface. For the single-parameter (i.e. L-curve) case, it is shown through a model that the regularization parameters minimizing the MDF essentially maximize the curvature of the L-curve. Furthermore, for both the single- and multiple-parameter cases the MDF approach leads to a simple fixed-point iterative algorithm for computing regularization parameters. Examples indicate that the algorithm converges rapidly, thereby making the problem of computing parameters according to the generalized corner of the L-hypersurface computationally tractable.
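
    As a minimal single-parameter sketch of the corner-selection idea (the paper's multi-parameter L-hypersurface and fixed-point iteration are not reproduced here), the code below sweeps a grid of Tikhonov parameters, traces the L-curve in log-log coordinates, and selects the value closest to an assumed reference point, in the spirit of a minimum distance function. The test problem and all parameter values are illustrative assumptions.

    import numpy as np

    def tikhonov(A, b, alpha):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    def l_curve_corner(A, b, alphas):
        # Trace the L-curve (log residual norm vs. log solution norm) and pick the
        # alpha whose point lies closest to a reference "origin" of the plot.
        pts = []
        for a in alphas:
            x = tikhonov(A, b, a)
            pts.append((np.log(np.linalg.norm(A @ x - b)),
                        np.log(np.linalg.norm(x))))
        pts = np.array(pts)
        ref = pts.min(axis=0)                       # componentwise minimum as reference
        return alphas[np.argmin(np.hypot(*(pts - ref).T))]

    # Small ill-conditioned test problem with known solution.
    rng = np.random.default_rng(1)
    U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
    V, _ = np.linalg.qr(rng.standard_normal((30, 30)))
    s = np.logspace(0, -6, 30)
    A = (U[:, :30] * s) @ V.T
    b = A @ np.ones(30) + 1e-4 * rng.standard_normal(50)

    alphas = np.logspace(-10, 0, 40)
    print("selected alpha:", l_curve_corner(A, b, alphas))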

  3. Parameter fitting in three-flavor Nambu-Jona-Lasinio model with various regularizations

    NASA Astrophysics Data System (ADS)

    Kohyama, H.; Kimura, D.; Inagaki, T.

    2016-05-01

    We study the three-flavor Nambu-Jona-Lasinio model with various regularization procedures. We perform parameter fitting in each regularization and apply the obtained parameter sets to evaluate various physical quantities: several light meson masses, the decay constant and the topological susceptibility. The model parameters are also adopted at cutoff scales that are very high compared to the hadronic scale, in order to study the asymptotic behavior of the model. It is found that all the regularization methods except the dimensional one lead to reliable physical predictions for the kaon decay constant, the sigma meson mass and the topological susceptibility without restricting the ultraviolet cutoff below the hadronic scale.

  4. Higher spin black holes in three dimensions: Remarks on asymptotics and regularity

    NASA Astrophysics Data System (ADS)

    Bañados, Máximo; Canto, Rodrigo; Theisen, Stefan

    2016-07-01

    In the context of (2+1)-dimensional SL(N,R) × SL(N,R) Chern-Simons theory we explore issues related to regularity and asymptotics on the solid torus, for stationary and circularly symmetric solutions. We display and solve all necessary conditions to ensure a regular metric and metriclike higher spin fields. We prove that holonomy conditions are necessary but not sufficient conditions to ensure regularity, and that Hawking conditions do not necessarily follow from them. Finally we give a general proof that once the chemical potentials are turned on—as demanded by regularity—the asymptotics cannot be that of Brown-Henneaux.

  5. Wavelet Regularization Per Nullspace Shuttle

    NASA Astrophysics Data System (ADS)

    Charléty, J.; Nolet, G.; Sigloch, K.; Voronin, S.; Loris, I.; Simons, F. J.; Daubechies, I.; Judd, S.

    2010-12-01

    Wavelet decomposition of models in an over-parameterized Earth and L1-norm minimization in wavelet space is a promising strategy to deal with the very heterogeneous data coverage in the Earth without sacrificing detail in the solution where this is resolved (see Loris et al., abstract this session). However, L1-norm minimizations are nonlinear, and pose problems of convergence speed when applied to large data sets. In an effort to speed up computations we investigate the application of the nullspace shuttle (Deal and Nolet, GJI 1996). The nullspace shuttle is a filter that adds components from the nullspace to the minimum norm solution so as to have the model satisfy additional conditions not imposed by the data. In our case, the nullspace shuttle projects the model on a truncated basis of wavelets. The convergence of this strategy is unproven, in contrast to algorithms using Landweber iteration or one of its variants, but initial computations using a very large data base give reason for optimism. We invert 430,554 P delay times measured by cross-correlation in different frequency windows. The data are dominated by observations with US Array, leading to a major discrepancy in the resolution beneath North America and the rest of the world. This is a subset of the data set inverted by Sigloch et al (Nature Geosci, 2008), excluding only a small number of ISC delays at short distance and all amplitude data. The model is a cubed Earth model with 3,637,248 voxels spanning mantle and crust, with a resolution everywhere better than 70 km, to which 1912 event corrections are added. In each iteration we determine the optimal solution by a least squares inversion with minimal damping, after which we regularize the model in wavelet space. We then compute the residual data vector (after an intermediate scaling step), and solve for a model correction until a satisfactory chi-square fit for the truncated model is obtained. We present our final results on convergence as well as a

  6. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... advances without approval of the NCUA Board for a period of six months after becoming a member. This subsection shall not apply to any credit union which becomes a Regular member of the Facility within six... member of the Facility at any time within six months prior to becoming a Regular member of the Facility....

  7. Regular Decompositions for H(div) Spaces

    SciTech Connect

    Kolev, Tzanio; Vassilevski, Panayot

    2012-01-01

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  8. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-04-01

    Possible nonperturbative continuum regularization schemes for quantum field theory are discussed which are based upon the Langevin equation of Parisi and Wu. Breit, Gupta and Zaks made the first proposal for a new gauge invariant nonperturbative regularization. The scheme is based on smearing in the ''fifth-time'' of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, two difficulties arise which, in general, ruin the scheme. One problem is that the superficial quadratic divergences force a bottomless action for the noise. Another difficulty is that stochastic regularization by fifth-time smearing is incompatible with Zwanziger's gauge fixing, which is the only known nonperturbative covariant gauge fixing for nonabelian gauge theories. Finally, a successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest. Hopefully, the scheme will lend itself to nonperturbative analysis. 44 refs., 16 figs.

  9. On regularizations of the Dirac delta distribution

    NASA Astrophysics Data System (ADS)

    Hosseini, Bamdad; Nigam, Nilima; Stockie, John M.

    2016-01-01

    In this article we consider regularizations of the Dirac delta distribution with applications to prototypical elliptic and hyperbolic partial differential equations (PDEs). We study the convergence of a sequence of distributions S_H to a singular term S as a parameter H (associated with the support size of S_H) shrinks to zero. We characterize this convergence in both the weak-* topology of distributions and a weighted Sobolev norm. These notions motivate a framework for constructing regularizations of the delta distribution that includes a large class of existing methods in the literature. This framework allows different regularizations to be compared. The convergence of solutions of PDEs with these regularized source terms is then studied in various topologies such as pointwise convergence on a deleted neighborhood and weighted Sobolev norms. We also examine the lack of symmetry in tensor product regularizations and effects of dissipative error in hyperbolic problems.
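
    A minimal numerical illustration of the kind of convergence studied here (not the paper's framework itself): a compactly supported, cosine-shaped regularization of the delta with support width 2H is tested against a smooth function, and its action converges to the point value f(0) as H shrinks. The particular kernel and test function are assumptions made for the example.

    import numpy as np

    def delta_h(x, H):
        # Raised-cosine regularization of the Dirac delta, supported on [-H, H],
        # normalized so that its integral equals one.
        phi = np.zeros_like(x)
        inside = np.abs(x) < H
        phi[inside] = (1.0 + np.cos(np.pi * x[inside] / H)) / (2.0 * H)
        return phi

    # Weak convergence check: the action on a smooth test function should tend to f(0).
    f = lambda x: np.exp(-x**2) * np.cos(3.0 * x)
    x = np.linspace(-1.0, 1.0, 200001)
    dx = x[1] - x[0]
    for H in (0.5, 0.1, 0.02, 0.004):
        approx = np.sum(delta_h(x, H) * f(x)) * dx     # simple quadrature
        print(f"H = {H:6.3f}   |error| = {abs(approx - f(0.0)):.2e}")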

  10. Manifold regularized non-negative matrix factorization with label information

    NASA Astrophysics Data System (ADS)

    Li, Huirong; Zhang, Jiangshe; Wang, Changpeng; Liu, Junmin

    2016-03-01

    Non-negative matrix factorization (NMF), a popular technique for finding parts-based, linear representations of non-negative data, has been successfully applied in a wide range of applications, such as feature learning, dictionary learning, and dimensionality reduction. However, the local manifold regularization of the data and the discriminative information in the available labels have not previously been taken into account together in NMF. We propose a new semisupervised matrix decomposition method, called manifold regularized non-negative matrix factorization (MRNMF) with label information, which incorporates manifold regularization and label information into NMF to improve the performance of NMF in clustering tasks. We encode the local geometrical structure of the data space by constructing a nearest neighbor graph and enhance the discriminative ability between classes by effectively using the label information. Experimental comparisons with state-of-the-art methods on the COIL20, PIE, Extended Yale B, and MNIST databases demonstrate the effectiveness of MRNMF.
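
    For orientation, the sketch below implements the standard multiplicative updates of graph-regularized NMF (X ≈ U V^T with an affinity graph W over samples), which is the unsupervised core that MRNMF builds on; the paper's label term is only hinted at here by the remark that label information can be folded into W. The matrix shapes, the toy data, and the parameters k and lam are assumptions for the example.

    import numpy as np

    def graph_regularized_nmf(X, W, k, lam=1.0, iters=200, seed=0):
        # Multiplicative updates for graph-regularized NMF: X (m x n, samples in
        # columns) is factored as U @ V.T with a graph penalty lam * tr(V.T L V),
        # where L = D - W is the Laplacian of the sample affinity matrix W.
        rng = np.random.default_rng(seed)
        m, n = X.shape
        U = rng.random((m, k))
        V = rng.random((n, k))
        D = np.diag(W.sum(axis=1))
        eps = 1e-9
        for _ in range(iters):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
        return U, V

    # Toy example: two well-separated clusters of samples, k-nearest-neighbour graph.
    rng = np.random.default_rng(1)
    X = np.hstack([rng.random((20, 15)) + 2.0, rng.random((20, 15))])
    S = X.T                                            # samples as rows, for distances
    dist = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)
    W = (dist < np.sort(dist, axis=1)[:, [5]]).astype(float)
    W = np.maximum(W, W.T)                             # symmetrize the kNN graph
    np.fill_diagonal(W, 0.0)
    U, V = graph_regularized_nmf(X, W, k=2, lam=0.5)
    print("cluster assignment:", V.argmax(axis=1))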

  11. Quantitative regularities in floodplain formation

    NASA Astrophysics Data System (ADS)

    Nevidimova, O.

    2009-04-01

    Quantitative regularities in floodplain formation. Modern methods of the theory of complex systems make it possible to build mathematical models of complex systems in which self-organizing processes are largely determined by nonlinear effects and feedback. However, some factors that exert a significant influence on the dynamics of geomorphosystems can hardly be expressed adequately in the language of mathematical models. Conceptual modeling allows us to overcome this difficulty. It is based on the methods of synergetics, which, together with the theory of dynamical systems and classical geomorphology, make it possible to describe the dynamics of geomorphological systems. The most adequate concept for mathematical modeling of complex systems is that of model dynamics based on equilibrium. This concept rests on dynamic equilibrium, the tendency toward which is observed in the evolution of all geomorphosystems. As an objective law, it is revealed in the evolution of fluvial relief in general, and in river channel processes in particular, demonstrating the ability of these systems to self-organize. The channel process is expressed in the formation of river reaches, rifts, meanders and the floodplain. As the floodplain is a surface periodically flooded during high waters, it naturally connects the river channel with the slopes, being one of the boundary expressions of the water stream's activity. Floodplain dynamics is inseparable from channel dynamics. The floodplain is formed by simultaneous horizontal and vertical displacement of the river channel, that is, Y = Y(x, y), where x and y are the horizontal and vertical coordinates and Y is the floodplain height. When dy/dt = 0 (for a non-lowering river channel), the river, being displaced in a horizontal plane, leaves behind a low surface whose flooding during high waters (total duration of flooding) changes from a maximum at the initial moment t0 to zero at the moment tn. The total amount of material accumulated on the floodplain surface changes in a similar manner.

  13. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    SciTech Connect

    Gao, J. M.; Liu, Y.; Li, W.; Lu, J.; Dong, Y. B.; Xia, Z. W.; Yi, P.; Yang, Q. W.

    2013-09-15

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation, and the solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is then obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter using the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.
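
    The sketch below is a generic, one-dimensional rendition of minimum Fisher regularization (not the HL-2A code, and without the generalized cross validation step for the regularization parameter): a Tikhonov-type inversion is repeated a few times with smoothing weights taken as the reciprocal of the previous emissivity estimate, which approximates the Fisher-information penalty ∫ (g')^2 / g dx. The chord geometry, noise level and lam are assumptions.

    import numpy as np

    def min_fisher_inversion(T, f, lam=1e-1, outer=5):
        # Iterated Tikhonov solve with Fisher-information weights 1/g from the
        # previous estimate; g is the discretized emissivity, T the geometry matrix.
        n = T.shape[1]
        D = np.diff(np.eye(n), axis=0)                 # first-difference operator
        g = np.full(n, f.mean() / max(T.sum(axis=1).mean(), 1e-12))
        for _ in range(outer):
            w = 1.0 / np.clip(g, 1e-6, None)           # Fisher weights from current g
            H = D.T @ (w[:-1, None] * D)               # weighted smoothing penalty
            g = np.linalg.solve(T.T @ T + lam * H, T.T @ f)
            g = np.clip(g, 0.0, None)                  # emissivity is non-negative
        return g

    # Toy chord geometry: each "detector" sees a Gaussian-weighted band of cells.
    rng = np.random.default_rng(0)
    n = 60
    cells = np.arange(n)
    T = np.array([np.exp(-0.5 * ((cells - c) / 4.0) ** 2) for c in range(2, n, 4)])
    g_true = np.exp(-0.5 * ((cells - 35) / 6.0) ** 2)
    f = T @ g_true + 0.01 * rng.standard_normal(T.shape[0])

    g_rec = min_fisher_inversion(T, f)
    print("peak cell (true, reconstructed):", g_true.argmax(), g_rec.argmax())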

  14. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A.

    PubMed

    Gao, J M; Liu, Y; Li, W; Lu, J; Dong, Y B; Xia, Z W; Yi, P; Yang, Q W

    2013-09-01

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation, and the solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is then obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter using the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges. PMID:24089825

  15. Functional MRI Using Regularized Parallel Imaging Acquisition

    PubMed Central

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M.; Belliveau, John W.; Wald, Lawrence L.; Kwong, Kenneth K.

    2013-01-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced number of data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss originating from the latter. Since regularization requires a static prior, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from a combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and with different array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions. PMID:16032694

  16. Wavelet regularization of the 2D incompressible Euler equations

    NASA Astrophysics Data System (ADS)

    Nguyen van Yen, Romain; Farge, Marie; Schneider, Kai

    2009-11-01

    We examine the viscosity dependence of the solutions of the two-dimensional Navier-Stokes equations in periodic and wall-bounded domains, for Reynolds numbers varying from 10^3 to 10^7. We compare the Navier-Stokes solutions to those of the regularized two-dimensional Euler equations. The regularization is performed by applying at each time step the wavelet-based CVS filter (Farge et al., Phys. Fluids, 11, 1999), which splits turbulent fluctuations into coherent and incoherent contributions. We find that for Reynolds numbers of about 10^5 and above the dissipation of coherent enstrophy tends to become independent of the Reynolds number, while the dissipation of total enstrophy decays to zero logarithmically with the Reynolds number. In the wall-bounded case, we observe an additional production of enstrophy at the wall. As a result, coherent enstrophy diverges as the Reynolds number tends to infinity, but its time derivative seems to remain bounded independently of the Reynolds number. This indicates that a balance may have been established between coherent enstrophy dissipation and coherent enstrophy production at the wall. The Reynolds number at which the dissipation of coherent enstrophy becomes independent of the Reynolds number is proposed to define the onset of the fully turbulent regime.

  17. Regular black holes and noncommutative geometry inspired fuzzy sources

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinpei

    2016-05-01

    We investigated regular black holes with fuzzy sources in three and four dimensions. The density distributions of such fuzzy sources are inspired by noncommutative geometry and given by Gaussian or generalized Gaussian functions. We utilized mass functions to give a physical interpretation of the horizon formation condition for the black holes. In particular, we investigated three-dimensional BTZ-like black holes and four-dimensional Schwarzschild-like black holes in detail, and found that the number of horizons is related to the space-time dimensions, and the existence of a void in the vicinity of the center of the space-time is significant, rather than noncommutativity. As an application, we considered a three-dimensional black hole with the fuzzy disc which is a disc-shaped region known in the context of noncommutative geometry as a source. We also analyzed a four-dimensional black hole with a source whose density distribution is an extension of the fuzzy disc, and investigated the horizon formation condition for it.

  18. Oseledets Regularity Functions for Anosov Flows

    NASA Astrophysics Data System (ADS)

    Simić, Slobodan N.

    2011-07-01

    Oseledets regularity functions quantify the deviation of the growth associated with a dynamical system along its Lyapunov bundles from the corresponding uniform exponential growth. The precise degree of regularity of these functions is unknown. We show that for every invariant Lyapunov bundle of a volume preserving Anosov flow on a closed smooth Riemannian manifold, the corresponding Oseledets regularity functions are in L^p(m), for some p > 0, where m is the probability measure defined by the volume form. We prove an analogous result for essentially bounded cocycles over volume preserving Anosov flows.

  19. Analysis of regularized inversion of data corrupted by white Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-04-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L^2-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ‖Au − m‖_{L^2}^2 + α‖u‖_{H^r}^2 } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and the first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L^2, but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed.
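
    A finite-dimensional sketch of the estimator discussed above (the paper itself works with pseudodifferential operators and continuum Sobolev spaces): A is a discretized smoothing operator, the noise is white Gaussian of magnitude delta, and the penalty ‖L_r u‖^2 uses an r-th order finite-difference matrix as a stand-in for the H^r seminorm, so r = 0 is standard Tikhonov and r = 1 a first-derivative penalty. The operator, noise level and alpha are assumptions.

    import numpy as np

    def tikhonov_sobolev(A, m, alpha, r=1):
        # argmin_u ||A u - m||^2 + alpha * ||L_r u||^2, with L_r an r-th order
        # finite-difference approximation of the derivative (L_0 = identity).
        n = A.shape[1]
        L = np.eye(n)
        for _ in range(r):
            L = np.diff(L, axis=0)
        return np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ m)

    # Smoothing forward operator plus white Gaussian noise of magnitude delta.
    rng = np.random.default_rng(2)
    n = 100
    x = np.linspace(0.0, 1.0, n)
    A = np.array([np.exp(-80.0 * (x - xi) ** 2) for xi in x]) / n
    u_true = np.sin(2.0 * np.pi * x) + (x > 0.5)
    delta = 1e-3
    m = A @ u_true + delta * rng.standard_normal(n)

    for r in (0, 1):
        u_rec = tikhonov_sobolev(A, m, alpha=1e-6, r=r)
        err = np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true)
        print(f"r = {r}: relative error = {err:.3f}")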

  20. Nonminimal black holes with regular electric field

    NASA Astrophysics Data System (ADS)

    Balakin, Alexander B.; Zayats, Alexei E.

    2015-05-01

    We discuss the problem of identifying the coupling constants which describe interactions between photons and spacetime curvature, using exact regular solutions to the extended equations of the nonminimal Einstein-Maxwell theory. We argue that the three nonminimal coupling constants in this theory can be reduced to a single guiding parameter, which plays the role of a nonminimal radius. We base our consideration on two examples of exact solutions obtained earlier in our works: the first describes a nonminimal spherically symmetric object (star or black hole) with a regular radial electric field; the second represents a nonminimal Dirac-type object (monopole or black hole) with a regular metric. We demonstrate that one of the inflexion points of the regular metric function identifies a specific nonminimal radius, thus marking the domain of dominance of nonminimal interactions.

  1. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)

  2. Regular Exercise May Boost Prostate Cancer Survival

    MedlinePlus

    ... nih.gov/medlineplus/news/fullstory_158374.html Regular Exercise May Boost Prostate Cancer Survival Study found that ... HealthDay News) -- Sticking to a moderate or intense exercise regimen may improve a man's odds of surviving ...

  3. Regular Exercise: Antidote for Deadly Diseases?

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_160326.html Regular Exercise: Antidote for Deadly Diseases? High levels of physical ... Aug. 9, 2016 (HealthDay News) -- Getting lots of exercise may reduce your risk for five common diseases, ...

  4. Parallelization of irregularly coupled regular meshes

    NASA Technical Reports Server (NTRS)

    Chase, Craig; Crowley, Kay; Saltz, Joel; Reeves, Anthony

    1992-01-01

    Regular meshes are frequently used for modeling physical phenomena on both serial and parallel computers. One advantage of regular meshes is that efficient discretization schemes can be implemented in a straightforward manner. However, geometrically complex objects, such as aircraft, cannot be easily described using a single regular mesh. Multiple interacting regular meshes are frequently used to describe complex geometries. Each mesh models a subregion of the physical domain. The meshes, or subdomains, can be processed in parallel, with periodic updates carried out to move information between the coupled meshes. In many cases, there are a relatively small number (one to a few dozen) of subdomains, so that each subdomain may also be partitioned among several processors. We outline a composite run-time/compile-time approach for supporting these problems efficiently on distributed-memory machines. These methods are described in the context of a multiblock fluid dynamics problem developed at LaRC.

  5. Blind Poissonian images deconvolution with framelet regularization.

    PubMed

    Fang, Houzhang; Yan, Luxin; Liu, Hai; Chang, Yi

    2013-02-15

    We propose a maximum a posteriori blind Poissonian images deconvolution approach with framelet regularization for the image and total variation (TV) regularization for the point spread function. Compared with the TV based methods, our algorithm not only suppresses noise effectively but also recovers edges and detailed information. Moreover, the split Bregman method is exploited to solve the resulting minimization problem. Comparative results on both simulated and real images are reported. PMID:23455078

  6. Regularized CT reconstruction on unstructured grid

    NASA Astrophysics Data System (ADS)

    Chen, Yun; Lu, Yao; Ma, Xiangyuan; Xu, Yuesheng

    2016-04-01

    Computed tomography (CT) is an ill-posed problem. Reconstruction on an unstructured grid reduces the computational cost and alleviates the ill-posedness by decreasing the dimension of the solution space. However, there has been no systematic study of edge-preserving regularization methods for CT reconstruction on unstructured grids. In this work, we propose a novel regularization method for CT reconstruction on unstructured grids, such as triangular or tetrahedral meshes generated from initial images reconstructed via an analytical reconstruction method (e.g., filtered back-projection). The proposed regularization method is modeled as a three-term optimization problem, containing a weighted least-squares fidelity term motivated by the simultaneous algebraic reconstruction technique (SART). The related cost function contains two non-differentiable terms, which complicate the development of a fast solver. A fixed-point proximity algorithm with SART is developed for solving the related optimization problem and accelerating the convergence. Finally, we compare the regularized CT reconstruction method to SART with different regularization methods. Numerical experiments demonstrate that the proposed regularization method on unstructured grids is effective in suppressing noise and preserving edge features.

  7. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-01-01

    Breit, Gupta, and Zaks made the first proposal for a new gauge invariant nonperturbative regularization. The scheme is based on smearing in the fifth-time of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, difficulties arise which, in general, ruin the scheme. A successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest.

  8. k-Regular maps into Euclidean spaces and the Borsuk-Boltyanskii problem

    SciTech Connect

    Bogatyi, S A

    2002-02-28

    The Borsuk-Boltyanskii problem is solved for odd k; that is, we determine the minimum dimension of a Euclidean space into which every n-dimensional polyhedron (compactum) can be k-regularly embedded. A new lower bound is obtained for even k.

  9. A maximal regularity estimate for the non-stationary Stokes equation in the strip

    NASA Astrophysics Data System (ADS)

    Choffrut, Antoine; Nobili, Camilla; Otto, Felix

    2016-04-01

    In a d-dimensional strip with d ≥ 2, we study the non-stationary Stokes equation with no-slip boundary condition in the lower and upper plates and periodic boundary condition in the horizontal directions. In this paper we establish a new maximal regularity estimate in the real interpolation norm

  10. Usual Source of Care in Preventive Service Use: A Regular Doctor versus a Regular Site

    PubMed Central

    Xu, K Tom

    2002-01-01

    Objective To compare the effects of having a regular doctor and having a regular site on five preventive services, controlling for the endogeneity of having a usual source of care. Data Source The Medical Expenditure Panel Survey 1996 conducted by the Agency for Healthcare Research and Quality and the National Center for Health Statistics. Study Design Mammograms, pap smears, blood pressure checkups, cholesterol level checkups, and flu shots were examined. A modified behavioral model framework was presented, which controlled for the endogeneity of having a usual source of care. Based on this framework, a two-equation empirical model was established to predict the probabilities of having a regular doctor and having a regular site, and the use of each type of preventive service. Principal Findings Having a regular doctor was found to have a greater impact than having a regular site on discretionary preventive services, such as blood pressure and cholesterol level checkups. No statistically significant differences were found between the effects of having a regular doctor and of having a regular site on the use of flu shots, pap smears, and mammograms. Among the five preventive services, having a usual source of care had the greatest impact on cholesterol level checkups and pap smears. Conclusions Promoting a stable physician–patient relationship can improve patients' timely receipt of clinical prevention. For certain preventive services, having a regular doctor is more effective than having a regular site. PMID:12546284

  11. Analysis of a Regularized Bingham Model with Pressure-Dependent Yield Stress

    NASA Astrophysics Data System (ADS)

    El Khouja, Nazek; Roquet, Nicolas; Cazacliu, Bogdan

    2015-12-01

    The goal of this article is to provide some essential results for the solution of a regularized viscoplastic frictional flow model adapted from the extensive mathematical analysis of the Bingham model. The Bingham model is a standard for the description of viscoplastic flows and is widely used in many application areas. However, wet granular viscoplastic flows necessitate the introduction of additional nonlinearities and coupling between the velocity and stress fields. This article proposes a step toward a frictional coupling, characterized by a dependence of the yield stress on the pressure field. A regularized version of this viscoplastic frictional model is analysed in the framework of stationary flows. Existence, uniqueness and regularity are investigated, as well as finite-dimensional and algorithmic approximations. It is shown that the model can be solved and approximated as long as the frictional parameter is small enough. Obtaining similar results for the non-regularized model remains an open issue. Numerical investigations are postponed to further work.

  12. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of the permeability and porosity distributions and on the theory of regularization, to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  13. Regular black holes in f (R ) gravity coupled to nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Rodrigues, Manuel E.; Junior, Ednaldo L. B.; Marques, Glauber T.; Zanchin, Vilson T.

    2016-07-01

    We obtain a class of regular black hole solutions in four-dimensional f(R) gravity, R being the curvature scalar, coupled to a nonlinear electromagnetic source. The metric formalism is used and static spherically symmetric spacetimes are assumed. The resulting f(R) and nonlinear electrodynamics functions are characterized by a one-parameter family of solutions which are generalizations of known regular black holes in general relativity coupled to nonlinear electrodynamics. The related regular black holes of general relativity are recovered when the free parameter vanishes, in which case one has f(R) ∝ R. We analyze the regularity of the solutions and also show that there are particular solutions that violate only the strong energy condition.

  14. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  15. Oxygen saturation resolution influences regularity measurements.

    PubMed

    Garde, Ainara; Karlen, Walter; Dehkordi, Parastoo; Ansermino, J Mark; Dumont, Guy A

    2014-01-01

    The measurement of regularity in the oxygen saturation (SpO2) signal has been suggested for use in identifying subjects with sleep disordered breathing (SDB). Previous work has shown that children with SDB have lower SpO2 regularity than subjects without SDB (NonSDB). Regularity was measured using non-linear methods such as approximate entropy (ApEn), sample entropy (SampEn) and Lempel-Ziv (LZ) complexity. Different manufacturers' pulse oximeters provide SpO2 at various resolutions, and the effect of this resolution difference on SpO2 regularity has not been studied. To investigate this effect, we used the SpO2 signal of children with and without SDB, recorded from the Phone Oximeter (0.1% resolution), and the same SpO2 signal rounded to the nearest integer (artificial 1% resolution). To further validate the effect of rounding, we also used the SpO2 signal (1% resolution) recorded simultaneously from polysomnography (PSG) as a control signal. We estimated SpO2 regularity by computing ApEn, SampEn and LZ complexity using a 5-min sliding window, and showed that different resolutions provided significantly different results. The regularity calculated using the 0.1% SpO2 resolution provided no significant differences between SDB and NonSDB. However, the artificial 1% resolution SpO2 provided significant differences between SDB and NonSDB, showing a more random SpO2 pattern (lower SpO2 regularity) in SDB children, as suggested in the past. Similar results were obtained with the SpO2 recorded from PSG (1% resolution), which further validated that this SpO2 regularity change was due to the rounding effect. Therefore, the SpO2 resolution has a great influence on regularity measurements such as ApEn, SampEn and LZ complexity, and should be considered when studying the SpO2 pattern in children with SDB. PMID:25570437
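
    To make the rounding effect concrete, the sketch below computes a plain sample-entropy estimate (a simplified SampEn, not the authors' processing pipeline or windowing) on a synthetic SpO2-like trace, once at 0.1% and once at 1% resolution; the signal model, tolerance r and template length m are assumptions.

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        # Naive SampEn: -ln(A/B), where B and A count template matches of length
        # m and m+1 under a Chebyshev tolerance r (default 0.2 * std of the signal).
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def matches(mm):
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
            return ((d <= r).sum() - len(templ)) / 2.0   # exclude self-matches
        B, A = matches(m), matches(m + 1)
        return np.inf if A == 0 or B == 0 else -np.log(A / B)

    # Synthetic SpO2-like trace: slow oscillation plus small fluctuations, 1 Hz, 5 min.
    rng = np.random.default_rng(3)
    t = np.arange(300)
    spo2 = 97.0 + 0.8 * np.sin(2.0 * np.pi * t / 90.0) + 0.3 * rng.standard_normal(t.size)

    print("SampEn at 0.1% resolution:", sample_entropy(np.round(spo2, 1)))
    print("SampEn at 1%   resolution:", sample_entropy(np.round(spo2)))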

  16. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties in the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques such as Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on the L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time required to solve the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on a separable approximation algorithm is proposed, using an adaptive step size and a preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798
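
    The sketch below shows the kind of L1-regularized reconstruction involved, using plain iterative soft thresholding (ISTA) on a linearized, synthetic sensitivity matrix; the paper's separable-approximation solver with adaptive step size and preconditioning is considerably more refined, and the matrix J, lam and the step size here are assumptions.

    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def sparse_recover(J, v, lam=0.5, iters=300):
        # ISTA for min_s ||J s - v||^2 + lam * ||s||_1 with a fixed step 1/L,
        # L being the Lipschitz constant of the gradient of the data term.
        s = np.zeros(J.shape[1])
        step = 1.0 / (2.0 * np.linalg.norm(J, 2) ** 2)
        for _ in range(iters):
            s = soft(s - step * 2.0 * (J.T @ (J @ s - v)), step * lam)
        return s

    # Toy linearized sensitivity matrix and a sharp conductivity perturbation.
    rng = np.random.default_rng(4)
    J = rng.standard_normal((120, 400))
    s_true = np.zeros(400)
    s_true[150:170] = 1.0
    v = J @ s_true + 0.01 * rng.standard_normal(120)

    s_rec = sparse_recover(J, v)
    support = np.flatnonzero(np.abs(s_rec) > 0.3 * np.abs(s_rec).max())
    print("recovered support range:", support.min(), "-", support.max())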

  17. Assessment of regularization techniques for electrocardiographic imaging

    PubMed Central

    Milanič, Matija; Jazbinšek, Vojko; MacLeod, Robert S.; Brooks, Dana H.; Hren, Rok

    2014-01-01

    A widely used approach to solving the inverse problem in electrocardiography involves computing potentials on the epicardium from measured electrocardiograms (ECGs) on the torso surface. The main challenge of solving this electrocardiographic imaging (ECGI) problem lies in its intrinsic ill-posedness. While many regularization techniques have been developed to control wild oscillations of the solution, the choice of proper regularization methods for obtaining clinically acceptable solutions is still a subject of ongoing research. However there has been little rigorous comparison across methods proposed by different groups. This study systematically compared various regularization techniques for solving the ECGI problem under a unified simulation framework, consisting of both 1) progressively more complex idealized source models (from single dipole to triplet of dipoles), and 2) an electrolytic human torso tank containing a live canine heart, with the cardiac source being modeled by potentials measured on a cylindrical cage placed around the heart. We tested 13 different regularization techniques to solve the inverse problem of recovering epicardial potentials, and found that non-quadratic methods (total variation algorithms) and first-order and second-order Tikhonov regularizations outperformed other methodologies and resulted in similar average reconstruction errors. PMID:24369741

  18. Shadow of rotating regular black holes

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon; Amir, Muhammed; Ahmedov, Bobomurat; Ghosh, Sushant G.

    2016-05-01

    We study the shadows cast by different types of rotating regular black holes, viz. Ayón-Beato-García (ABG), Hayward, and Bardeen. In addition to the total mass (M) and rotation parameter (a), these black holes carry further parameters: an electric charge (Q), a deviation parameter (g), and a magnetic charge (g*). Interestingly, the size of the shadow is affected by these parameters in addition to the rotation parameter. We find that the radius of the shadow in each case decreases monotonically, and the distortion parameter increases, when the values of these parameters increase. A comparison with the standard Kerr case is also presented. We have also studied the influence of a plasma environment around regular black holes on their shadows. The presence of the plasma increases the apparent size of the regular black hole's shadow due to two effects: (i) the gravitational redshift of the photons and (ii) the radial dependence of the plasma density.

  19. Strong regularizing effect of integrable systems

    SciTech Connect

    Zhou, Xin

    1997-11-01

    Many time evolution problems have the so-called strong regularization effect, that is, with any irregular initial data, as soon as t becomes greater than 0 the solution becomes C^∞ in both the spatial and temporal variables. This paper studies (1+1)-dimensional integrable systems with respect to such a regularizing effect. In the work by Sachs and Kappler [S][K] (see also the earlier works [KFJ] and [Ka]), the strong regularizing effect is proved for KdV with rapidly decaying irregular initial data, using the inverse scattering method. There are two equivalent Gel'fand-Levitan-Marchenko (GLM) equations associated to an inverse scattering problem, one normalized at x = +∞ and the other at x = -∞. The method of [S][K] relies on the fact that the KdV waves propagate only in one direction and therefore one of the two GLM equations remains normalized and can be differentiated infinitely many times. 15 refs.

  20. Regularized image recovery in scattering media.

    PubMed

    Schechner, Yoav Y; Averbuch, Yuval

    2007-09-01

    When imaging in scattering media, visibility degrades as objects become more distant. Visibility can be significantly restored by computer vision methods that account for physical processes occurring during image formation. Nevertheless, such recovery is prone to noise amplification in pixels corresponding to distant objects, where the medium transmittance is low. We present an adaptive filtering approach that counters the above problems: while significantly improving visibility relative to raw images, it inhibits noise amplification. Essentially, the recovery formulation is regularized, where the regularization adapts to the spatially varying medium transmittance. Thus, this regularization does not blur close objects. We demonstrate the approach in atmospheric and underwater experiments, based on an automatic method for determining the medium transmittance. PMID:17627052

  1. [Why regular physical activity favors longevity].

    PubMed

    Pentimone, F; Del Corso, L

    1998-06-01

    Regular physical exercise is useful at all ages. In the elderly, even a gentle exercise programme consisting of walking, bicycling, or playing golf, if performed consistently, increases longevity by preventing the onset of the main diseases or alleviating the handicaps they may have caused. Cardiovascular diseases, which represent the main cause of death in the elderly, and osteoporosis, a disabling disease potentially capable of shortening life expectancy, benefit from physical exercise, which, if performed regularly well before the start of old age, may help to prevent them. Over the past few years there has been growing evidence of the concrete protection it offers against neoplasia and even the ageing process itself. PMID:9739351

  2. Learning with regularizers in multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Saad, David; Rattray, Magnus

    1998-02-01

    We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labeled by a two-layer teacher network with an arbitrary number of hidden units that may be corrupted by Gaussian output noise. We examine the effect of weight decay regularization on the dynamical evolution of the order parameters and generalization error in various phases of the learning process, in both noiseless and noisy scenarios.

  3. Demosaicing as the problem of regularization

    NASA Astrophysics Data System (ADS)

    Kunina, Irina; Volkov, Aleksey; Gladilin, Sergey; Nikolaev, Dmitry

    2015-12-01

    Demosaicing is the process of reconstructing a full-color image from the Bayer mosaic used in digital cameras for image formation. This problem is usually treated as an interpolation problem. In this paper, we propose instead to consider demosaicing as the problem of solving an underdetermined system of algebraic equations using regularization methods. We consider regularization with the standard l1/2-, l1- and l2-norms and their effect on the quality of image reconstruction. The experimental results show that the proposed technique can both be used within existing methods and serve as the basis for new ones.
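
    As a small worked example of the underdetermined-system view (restricted here to the l2 penalty; the l1 and l1/2 penalties discussed above require iterative solvers), one colour plane of a Bayer pattern is written as M x = y with a diagonal sampling operator M and solved with a discrete-gradient smoothness regularizer. The image, mask and lam are assumptions.

    import numpy as np

    def demosaic_channel_l2(y, mask, lam=0.05):
        # Solve min_x ||M x - y||^2 + lam * ||grad x||^2 for one colour plane,
        # where M keeps only the pixels present in the Bayer mosaic.
        h, w = mask.shape
        M = np.diag(mask.ravel().astype(float))
        Dx = np.kron(np.eye(h), np.diff(np.eye(w), axis=0))   # horizontal differences
        Dy = np.kron(np.diff(np.eye(h), axis=0), np.eye(w))   # vertical differences
        A = M.T @ M + lam * (Dx.T @ Dx + Dy.T @ Dy)
        return np.linalg.solve(A, M.T @ y.ravel()).reshape(h, w)

    # Red plane of an 8x8 Bayer mosaic: only every other pixel of every other row is known.
    truth = np.outer(np.linspace(0.0, 1.0, 8), np.linspace(1.0, 0.0, 8))
    mask = np.zeros((8, 8), dtype=bool)
    mask[::2, ::2] = True
    observed = np.where(mask, truth, 0.0)

    recon = demosaic_channel_l2(observed, mask)
    print("max abs reconstruction error:", np.abs(recon - truth).max())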

  4. REGULAR VERSUS DIFFUSIVE PHOTOSPHERIC FLUX CANCELLATION

    SciTech Connect

    Litvinenko, Yuri E.

    2011-04-20

    Observations of photospheric flux cancellation on the Sun imply that cancellation can be a diffusive rather than regular process. A criterion is derived, which quantifies the parameter range in which diffusive photospheric cancellation should occur. Numerical estimates show that regular cancellation models should be expected to give a quantitatively accurate description of photospheric cancellation. The estimates rely on a recently suggested scaling for a turbulent magnetic diffusivity, which is consistent with the diffusivity measurements on spatial scales varying by almost two orders of magnitude. Application of the turbulent diffusivity to large-scale dispersal of the photospheric magnetic flux is discussed.

  5. A new regularity-based algorithm for characterizing heterogeneities from digitized core image

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Zaourar, Naima; Hachay, Olga

    2014-05-01

    The two-dimensional multifractional Brownian motion (2D-mBm) is receiving increasing interest in image processing. However, one difficulty inherent to this fractal model is the estimation of its local Hölderian regularity function. In this paper, we suggest a new estimator of the local Hölder exponent of 2D-mBm paths. The suggested algorithm was first tested on synthetic 2D-mBm paths and then applied to digitized image data of a core extracted from an Algerian borehole. The obtained regularity map shows a clear correlation with the geological features observed on the investigated core: the lithological discontinuities are reflected by local variations of the Hölder exponent value. However, no clear relationship can be drawn between regularity and the digitized data themselves. To conclude, the suggested algorithm may be a powerful tool for exploring heterogeneities from core images using regularity exponents. Keywords: core image, two-dimensional multifractional Brownian motion, fractal, regularity.

  6. Regularized Data Assimilation and Fusion of non-Gaussian States Exhibiting Sparse Prior in Transform Domains

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2012-12-01

    Improved estimation of geophysical state variables in a noisy environment from down-sampled observations and background model forecasts has been the subject of growing research in the past decades. Often the number of degrees of freedom of high-dimensional non-Gaussian natural states is quite small compared to their ambient dimensionality, a property often revealed as a sparse representation in an appropriately chosen domain. Aiming to increase hydrometeorological forecast skill, and motivated by the wavelet-domain sparsity of some land-surface geophysical states, a new framework is presented that recasts the classical variational data assimilation/fusion (DA/DF) problem via L_1 regularization in the wavelet domain. Our results suggest that proper regularization can lead to more accurate recovery of a wide range of smooth/non-smooth geophysical states exhibiting remarkable non-Gaussian features. The promise of the proposed framework is demonstrated for multi-sensor satellite and land-based precipitation data fusion, while the regularized DA is performed on the heat equation in a 4D-VAR context, using sparse regularization in the wavelet domain. (Figure: top panel, noisy observations of the linear advection-diffusion equation at five consecutive snapshots; middle panel, classical 4D-VAR; bottom panel, l_1-regularized 4D-VAR with improved results.)
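
    A sketch of only the core sparsity-promoting step, not the paper's 4D-VAR machinery: with an orthonormal wavelet transform, l1 regularization of the wavelet coefficients reduces to soft-thresholding. The Haar transform, test signal and threshold below are assumptions made for illustration.

```python
import numpy as np

# Core sparsity step only: with an orthonormal wavelet transform W, the problem
#   min_x 0.5 * ||x - y||^2 + lam * ||W x||_1
# is solved by soft-thresholding the wavelet coefficients, x = W^T soft(W y, lam).
# The Haar transform, test signal and lam are illustrative assumptions.

def haar_matrix(n):
    """Orthonormal Haar transform matrix, n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # averages
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # details
    return np.vstack([top, bot]) / np.sqrt(2.0)

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(3)
n = 64
truth = np.zeros(n); truth[20:30] = 1.0          # piecewise-constant "state"
y = truth + 0.2 * rng.normal(size=n)             # noisy observation

W = haar_matrix(n)
x_hat = W.T @ soft(W @ y, 0.3)
print("rmse noisy   :", np.sqrt(np.mean((y - truth) ** 2)))
print("rmse denoised:", np.sqrt(np.mean((x_hat - truth) ** 2)))
```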

  7. The effect of regularization on the reconstruction of ACAR data

    NASA Astrophysics Data System (ADS)

    Weber, J. A.; Ceeh, H.; Hugenschmidt, C.; Leitner, M.; Böni, P.

    2014-04-01

    The Fermi surface, i.e. the two-dimensional surface separating occupied and unoccupied states in k-space, is the defining property of a metal. Full information about its shape is mandatory for identifying nesting vectors or for validating band structure calculations. Projections of the Fermi surface are easily obtained from the angular correlation of positron-electron annihilation radiation (ACAR). Nevertheless, the technique is often claimed to be less exact than more common methods such as the determination based on quantum oscillations or angle-resolved photoemission spectroscopy. In this article we present a method for reconstructing the Fermi surface from projections with statistically correct data treatment, which increases the accuracy by introducing different types of regularization.

  8. Regularity for steady periodic capillary water waves with vorticity.

    PubMed

    Henry, David

    2012-04-13

    In the following, we prove new regularity results for two-dimensional steady periodic capillary water waves with vorticity, in the absence of stagnation points. Firstly, we prove that if the vorticity function has a Hölder-continuous first derivative, then the free surface is a smooth curve and the streamlines beneath the surface will be real analytic. Furthermore, once we assume that the vorticity function is real analytic, it will follow that the wave surface profile is itself also analytic. A particular case of this result includes irrotational fluid flow where the vorticity is zero. The property of the streamlines being analytic allows us to gain physical insight into small-amplitude waves by justifying a power-series approach. PMID:22393112

  9. Uncorrelated regularized local Fisher discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Zhan; Ruan, Qiuqi; An, Gaoyun

    2014-07-01

    A local Fisher discriminant analysis can work well for a multimodal problem. However, it often suffers from the undersampled problem, which makes the local within-class scatter matrix singular. We develop a supervised discriminant analysis technique called uncorrelated regularized local Fisher discriminant analysis for image feature extraction. In this technique, the local within-class scatter matrix is approximated by a full-rank matrix that not only solves the undersampled problem but also eliminates the poor impact of small and zero eigenvalues. Statistically uncorrelated features are obtained to remove redundancy. A trace ratio criterion and the corresponding iterative algorithm are employed to globally solve the objective function. Experimental results on four famous face databases indicate that our proposed method is effective and outperforms the conventional dimensionality reduction methods.
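
    A generic sketch of scatter-matrix regularization in a Fisher discriminant, assuming a plain two-class (non-local) formulation with an arbitrary shrinkage strength; the authors' uncorrelated local variant and trace-ratio solver are not reproduced.

```python
import numpy as np

# Generic scatter-matrix regularization for a two-class Fisher discriminant
# (illustration only; the authors' uncorrelated *local* variant and trace-ratio
# solver are not reproduced).  The shrinkage strength gamma is an assumption.
rng = np.random.default_rng(4)
n_per_class, dim = 20, 50                        # undersampled: dim > samples per class
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, dim)),
               rng.normal(1.0, 1.0, (n_per_class, dim))])
y = np.array([0] * n_per_class + [1] * n_per_class)

mean_all = X.mean(axis=0)
Sw = np.zeros((dim, dim)); Sb = np.zeros((dim, dim))
for c in (0, 1):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)                           # within-class scatter (singular here)
    Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)  # between-class scatter

gamma = 0.1
Sw_reg = (1 - gamma) * Sw + gamma * (np.trace(Sw) / dim) * np.eye(dim)  # full-rank approximation

evals, evecs = np.linalg.eig(np.linalg.solve(Sw_reg, Sb))   # Fisher eigenproblem
w = np.real(evecs[:, np.argmax(np.real(evals))])            # leading discriminant direction
print("class separation along w:",
      np.abs((X[y == 1] @ w).mean() - (X[y == 0] @ w).mean()))
```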

  10. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping. PMID:24976795

  11. Regularizing the divergent structure of light-front currents

    SciTech Connect

    Bakker, Bernard L. G.; Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2001-04-01

    The divergences appearing in the (3+1)-dimensional fermion-loop calculations are often regulated by smearing the vertices in a covariant manner. Performing a parallel light-front calculation, we corroborate the similarity between the vertex-smearing technique and the Pauli-Villars regularization. In the light-front calculation of the electromagnetic meson current, we find that the persistent end-point singularity that appears in the case of point vertices is removed even if the smeared vertex is taken to the limit of the point vertex. Recapitulating the current conservation, we substantiate the finiteness of both valence and nonvalence contributions in all components of the current with the regularized bound-state vertex. However, we stress that each contribution, valence or nonvalence, depends on the reference frame even though the sum is always frame independent. The numerical taxonomy of each contribution, including the instantaneous contribution and the zero-mode contribution, is presented in the π, K, and D-meson form factors.
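
    As a rough numerical illustration of the Pauli-Villars idea invoked above (a toy logarithmically divergent integral, not the paper's light-front loop), subtracting the same integrand evaluated with a heavy regulator mass renders the result cutoff independent:

```python
import numpy as np
from scipy.integrate import quad

# Toy illustration (not the paper's light-front loop): the logarithmically
# divergent integral I(L) = int_0^L k dk / (k^2 + m^2) becomes cutoff independent
# once the same integrand with a heavy Pauli-Villars mass M is subtracted;
# the difference tends to 0.5 * ln(M^2 / m^2).
m, M = 1.0, 50.0
for L in (1e2, 1e3, 1e4, 1e5):
    bare, _ = quad(lambda k: k / (k**2 + m**2), 0.0, L, limit=200)
    reg, _ = quad(lambda k: k / (k**2 + m**2) - k / (k**2 + M**2), 0.0, L, limit=200)
    print(f"cutoff L = {L:8.0e}   bare = {bare:7.3f}   regulated = {reg:7.4f}")
print("analytic limit 0.5*ln(M^2/m^2) =", 0.5 * np.log(M**2 / m**2))
```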

  12. Spectral analysis of two-dimensional Bose-Hubbard models

    NASA Astrophysics Data System (ADS)

    Fischer, David; Hoffmann, Darius; Wimberger, Sandro

    2016-04-01

    One-dimensional Bose-Hubbard models are well known to exhibit a transition from regular to quantum-chaotic spectral statistics. We extend this concept to relatively simple two-dimensional many-body models. In two dimensions, a transition from regular to chaotic spectral statistics is likewise found and discussed. In particular, we analyze the dependence of the spectral properties on the bond number of the two-dimensional lattices and on the applied boundary conditions. For maximal connectivity, the systems behave most regularly, in agreement with the applicability of mean-field approaches in the limit of many nearest-neighbor couplings at each site.
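
    A sketch of the standard spectral-statistics diagnostic behind such studies, using generic surrogates (a Poisson sequence and a GOE random matrix) rather than the Bose-Hubbard spectra of the paper; the mean consecutive-spacing ratio separates regular (about 0.39) from chaotic (about 0.53) statistics.

```python
import numpy as np

# Standard diagnostic used in such studies, on generic surrogates (not the
# Bose-Hubbard Hamiltonians of the paper): the mean ratio of consecutive level
# spacings is ~0.39 for regular (Poisson) and ~0.53 for chaotic (GOE) spectra.
rng = np.random.default_rng(5)

def mean_spacing_ratio(levels):
    s = np.diff(np.sort(levels))
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    return r.mean()

n = 2000
poisson_levels = np.cumsum(rng.exponential(size=n))            # "regular" surrogate spectrum
A = rng.normal(size=(n, n))
goe_levels = np.linalg.eigvalsh((A + A.T) / np.sqrt(2.0 * n))  # "chaotic" surrogate spectrum

print("Poisson <r> ~", round(mean_spacing_ratio(poisson_levels), 3))
print("GOE     <r> ~", round(mean_spacing_ratio(goe_levels), 3))
```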

  13. Dyslexia in Regular Orthographies: Manifestation and Causation

    ERIC Educational Resources Information Center

    Wimmer, Heinz; Schurz, Matthias

    2010-01-01

    This article summarizes our research on the manifestation of dyslexia in German and on cognitive deficits, which may account for the severe reading speed deficit and the poor orthographic spelling performance that characterize dyslexia in regular orthographies. An only limited causal role of phonological deficits (phonological awareness,…

  14. Starting flow in regular polygonal ducts

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.

    2016-06-01

    The starting flows in regular polygonal ducts of S = 3, 4, 5, 6, 8 sides are determined by the method of eigenfunction superposition. The necessary S-fold symmetric eigenfunctions and eigenvalues of the Helmholtz equation are found either exactly or by boundary point match. The results show the starting time is governed by the first eigenvalue.

  15. 28 CFR 540.44 - Regular visitors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PERSONS IN THE COMMUNITY Visiting Regulations § 540.44 Regular visitors. An inmate desiring to have... ordinarily will be extended to friends and associates having an established relationship with the inmate... of the institution. Exceptions to the prior relationship rule may be made, particularly for...

  16. Regular Classroom Teachers' Perceptions of Mainstreaming Effects.

    ERIC Educational Resources Information Center

    Ringlaben, Ravic P.; Price, Jay R.

    To assess regular classroom teachers' perceptions of mainstreaming, a 22 item questionnaire was completed by 117 teachers (K through 12). Among results were that nearly half of the Ss indicated a lack of preparation for implementing mainstreaming; 47% tended to be very willing to accept mainstreamed students; 42% said mainstreaming was working…

  17. Regularizing cosmological singularities by varying physical constants

    SciTech Connect

    Dąbrowski, Mariusz P.; Marosek, Konrad E-mail: k.marosek@wmf.univ.szczecin.pl

    2013-02-01

    Cosmologies with varying physical constants have been claimed to solve standard cosmological problems such as the horizon, flatness, and Λ-problems. In this paper, we suggest yet another possible application of these theories: solving the singularity problem. By working out some specific examples we show that various cosmological singularities may be regularized provided the physical constants evolve in time in an appropriate way.

  18. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... the credit union's paid-in and unimpaired capital and surplus, as determined in accordance with §...

  19. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... the credit union's paid-in and unimpaired capital and surplus, as determined in accordance with §...

  20. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... the credit union's paid-in and unimpaired capital and surplus, as determined in accordance with §...

  1. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... the credit union's paid-in and unimpaired capital and surplus, as determined in accordance with §...

  2. Commitment and Dependence Upon Regular Running.

    ERIC Educational Resources Information Center

    Sachs, Michael L.; Pargman, David

    The linear relationship between intellectual commitment to running and psychobiological dependence upon running is examined. A sample of 540 regular runners (running frequency greater than three days per week for the past year for the majority) was surveyed with a questionnaire. Measures of commitment and dependence on running, as well as…

  3. RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING.

    PubMed

    Liu, Meizhu; Vemuri, Baba C

    2011-03-30

    Boosting is a versatile machine learning technique that has numerous applications including, but not limited to, image processing, computer vision, and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities (available in closed form) - used to represent the distribution over the training data and the classification error, respectively - to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost compared to other regularized boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LPBoost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home-grown epilepsy database and the well known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
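
    A sketch of the closed-form ingredient referred to above: for discrete densities, the square-root representation lies on the unit sphere, where the geodesic distance is the arccosine of the Bhattacharyya coefficient. The boosting update that uses this distance is the paper's contribution and is not reproduced here.

```python
import numpy as np

# The closed-form ingredient: square roots of discrete densities live on the
# unit sphere, and the geodesic (Fisher-Rao) distance between them is the
# arccosine of the Bhattacharyya coefficient.  The boosting update that uses
# this distance is not reproduced here.
def sqrt_density_distance(p, q):
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()                 # make sure both are densities
    bc = np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0)   # Bhattacharyya coefficient
    return np.arccos(bc)                             # great-circle distance on the sphere

uniform = np.ones(10) / 10.0
peaked = np.array([0.82] + [0.02] * 9)
print(sqrt_density_distance(uniform, np.ones(10)))   # 0.0: identical after normalization
print(sqrt_density_distance(uniform, peaked))        # larger: very different distributions
```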

  4. Generalisation of Regular and Irregular Morphological Patterns.

    ERIC Educational Resources Information Center

    Prasada, Sandeep; and Pinker, Steven

    1993-01-01

    When it comes to explaining English verbs' patterns of regular and irregular generalization, single-network theories have difficulty with the former, and rule-only theories with the latter. Linguistic and psycholinguistic evidence, based on observation during experiments and simulations in morphological pattern generation, independently call…

  5. Observing Special and Regular Education Classrooms.

    ERIC Educational Resources Information Center

    Hersh, Susan B.

    The paper describes an observation instrument originally developed as a research tool to assess both the special setting and the regular classroom. The instrument can also be used in determining appropriate placement for students with learning disabilities and for programming the transfer of skills learned in the special setting to the regular…

  6. Handicapped Children in the Regular Classroom.

    ERIC Educational Resources Information Center

    Fountain Valley School District, CA.

    Reported was a project in which 60 educable mentally retarded (EMR) and 30 educationally handicapped (EH) elementary school students were placed in regular classrooms to determine whether they could be effectively educated in those settings. Effective education was defined in terms of improvement in reading, mathematics, student and teacher…

  7. Fast Image Reconstruction with L2-Regularization

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Fan, Audrey P.; Setsompop, Kawin; Cauley, Stephen F.; Wald, Lawrence L.; Adalsteinsson, Elfar

    2014-01-01

    Purpose We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speed-up relative to state of the art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction. Materials and Methods We compare fast L2-based methods to state of the art algorithms employing iterative L1- and L2-regularization in numerical phantom and in vivo data in three applications: 1) fast Quantitative Susceptibility Mapping (QSM), 2) lipid artifact suppression in Magnetic Resonance Spectroscopic Imaging (MRSI), and 3) Diffusion Spectrum Imaging (DSI). In all cases, the proposed L2-based methods are compared with the state of the art algorithms, and a two to three orders of magnitude speed-up is demonstrated with similar reconstruction quality. Results The closed-form solution developed for regularized QSM allows processing of a 3D volume in under 5 seconds, the proposed lipid suppression algorithm takes under 1 second to reconstruct single-slice MRSI data, while the PCA-based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice in under 30 seconds, all running in Matlab on a standard workstation. Conclusion For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or L1-based iterative algorithms, without compromising image quality. PMID:24395184
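
    A generic dense-matrix sketch of the closed-form l2 (Tikhonov) solution that such speed-ups rest on; the FFT-based diagonalization the paper exploits, and the problem sizes, operators and weight below, are assumptions rather than the paper's setup.

```python
import numpy as np

# Generic dense-matrix sketch of the closed-form l2 (Tikhonov) reconstruction
#   x = (A^T A + lam * L^T L)^(-1) A^T b
# (the FFT diagonalization exploited in the paper is not reproduced; sizes,
# operators and lam below are assumptions).
rng = np.random.default_rng(6)
n_meas, n_vox = 80, 120
A = rng.normal(size=(n_meas, n_vox))          # forward model (under-determined)
truth = np.zeros(n_vox); truth[40:60] = 1.0
b = A @ truth + 0.05 * rng.normal(size=n_meas)

L = np.diff(np.eye(n_vox), axis=0)            # finite-difference regularization operator
lam = 1.0
x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
print("relative reconstruction error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```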

  8. Exploring the structural regularities in networks

    NASA Astrophysics Data System (ADS)

    Shen, Hua-Wei; Cheng, Xue-Qi; Guo, Jia-Feng

    2011-11-01

    In this paper, we consider the problem of exploring structural regularities of networks by dividing the nodes of a network into groups such that the members of each group have similar patterns of connections to other groups. Specifically, we propose a general statistical model to describe network structure. In this model, a group is viewed as a hidden or unobserved quantity and it is learned by fitting the observed network data using the expectation-maximization algorithm. Compared with existing models, the most prominent strength of our model is the high flexibility. This strength enables it to possess the advantages of existing models and to overcome their shortcomings in a unified way. As a result, not only can broad types of structure be detected without prior knowledge of the type of intrinsic regularities existing in the target network, but also the type of identified structure can be directly learned from the network. Moreover, by differentiating outgoing edges from incoming edges, our model can detect several types of structural regularities beyond competing models. Tests on a number of real world and artificial networks demonstrate that our model outperforms the state-of-the-art model in shedding light on the structural regularities of networks, including the overlapping community structure, multipartite structure, and several other types of structure, which are beyond the capability of existing models.

  9. Regularities in Spearman's Law of Diminishing Returns.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    2003-01-01

    Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…

  10. Functional calculus and *-regularity of a class of Banach algebras II

    NASA Astrophysics Data System (ADS)

    Leung, Chi-Wai; Ng, Chi-Keung

    2006-10-01

    In this article, we define a natural Banach *-algebra for a C*-dynamical system (A, G, α) which is slightly bigger than L1(G;A) (they are the same if A is finite-dimensional). We will show that this algebra is *-regular if G has polynomial growth. The main result in this article extends the two main results in [C.W. Leung, C.K. Ng, Functional calculus and *-regularity of a class of Banach algebras, Proc. Amer. Math. Soc., in press].

  11. The geometric β-function in curved space-time under operator regularization

    SciTech Connect

    Agarwala, Susama

    2015-06-15

    In this paper, I compare the generators of the renormalization group flow, or the geometric β-functions, for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric β-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I further extend the analysis to find the generator of the renormalization group flow for conformally coupled scalar-field theories on the same manifolds. The geometric β-function in this case is not defined.

  12. Learning regular expressions for clinical text classification

    PubMed Central

    Bui, Duy Duc An; Zeng-Treitler, Qing

    2014-01-01

    Objectives Natural language processing (NLP) applications typically use regular expressions that have been developed manually by human experts. Our goal is to automate both the creation and utilization of regular expressions in text classification. Methods We designed a novel regular expression discovery (RED) algorithm and implemented two text classifiers based on RED. The RED+ALIGN classifier combines RED with an alignment algorithm, and RED+SVM combines RED with a support vector machine (SVM) classifier. Two clinical datasets were used for testing and evaluation: the SMOKE dataset, containing 1091 text snippets describing smoking status; and the PAIN dataset, containing 702 snippets describing pain status. We performed 10-fold cross-validation to calculate accuracy, precision, recall, and F-measure metrics. In the evaluation, an SVM classifier was trained as the control. Results The two RED classifiers achieved 80.9–83.0% in overall accuracy on the two datasets, which is 1.3–3% higher than SVM's accuracy (p<0.001). Similarly, small but consistent improvements have been observed in precision, recall, and F-measure when RED classifiers are compared with SVM alone. More significantly, RED+ALIGN correctly classified many instances that were misclassified by the SVM classifier (8.1–10.3% of the total instances and 43.8–53.0% of SVM's misclassifications). Conclusions Machine-generated regular expressions can be effectively used in clinical text classification. The regular expression-based classifier can be combined with other classifiers, like SVM, to improve classification performance. PMID:24578357
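
    A toy illustration of the setting only; the regexes, labels and snippets below are invented, and the point of the paper is precisely that its RED algorithm learns such patterns automatically rather than relying on hand-written rules like these.

```python
import re

# Invented toy rules and snippets, only to illustrate the setting: the paper's
# RED algorithm *learns* such regular expressions automatically instead of
# relying on hand-written patterns like these.
RULES = [
    (re.compile(r"\b(never|non)[- ]?smok", re.I), "non-smoker"),
    (re.compile(r"\bquit\b|\bformer smoker\b|\bex[- ]smoker\b", re.I), "former smoker"),
    (re.compile(r"current(ly)?\s+smok|\d+\s*pack[- ]year", re.I), "current smoker"),
]

def classify(snippet, default="unknown"):
    for pattern, label in RULES:
        if pattern.search(snippet):
            return label
    return default

for text in ["Patient is a never smoker.",
             "Quit smoking 10 years ago.",
             "Reports a 20 pack-year history, currently smoking."]:
    print(f"{classify(text):14s} <- {text}")
```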

  13. A comprehensive methodology for algorithm characterization, regularization and mapping into optimal VLSI arrays

    SciTech Connect

    Barada, H.R.

    1989-01-01

    This dissertation provides a fairly comprehensive treatment of a broad class of algorithms as it pertains to systolic implementation. The authors describe some formal algorithmic transformations that can be utilized to map regular and some irregular compute-bound algorithms into the beat fit time-optimal systolic architectures. The resulted architectures can be one-dimensional, two-dimensional, three-dimensional or nonplanar. The methodology detailed in the dissertation employs, like other methods, the concept of dependence vector to order, in space and time, the index points representing the algorithm. However, by differentiating between two types of dependence vectors, the ordering procedure is allowed to be flexible and time optimal. Furthermore, unlike other methodologies, the approach reported here does not put constraints on the topology or dimensionality of the target architecture. The ordered index points are represented by nodes in a diagram called Systolic Precedence Diagram (SPD). The SPD is a form of precedence graph that takes into account the systolic operation requirements of strictly local communications and regular data flow. Therefore, any algorithm with variable dependence vectors has to be transformed into a regular indexed set of computations with local dependencies. This can be done by replacing variable dependence vectors with sets of fixed dependence vectors. The SPD is transformed into an acyclic, labeled, directed graph called the Systolic Directed Graph (SDG). The SDG models the data flow as well as the timing for the execution of the given algorithm on a time-optimal array.

  14. Maximum-likelihood constrained regularized algorithms: an objective criterion for the determination of regularization parameters

    NASA Astrophysics Data System (ADS)

    Lanteri, Henri; Roche, Muriel; Cuevas, Olga; Aime, Claude

    1999-12-01

    We propose regularized versions of maximum likelihood algorithms for Poisson processes with a non-negativity constraint. For such processes, the best-known (non-regularized) algorithm is that of Richardson-Lucy, extensively used for astronomical applications. Regularization is necessary to prevent amplification of the noise during the iterative reconstruction; this can be done either by limiting the number of iterations or by introducing a penalty term. In this Communication, we focus our attention on explicit regularization using Tikhonov (identity and Laplacian operators) or entropy terms (Kullback-Leibler and Csiszar divergences). The algorithms are established from the Kuhn-Tucker first-order optimality conditions for the minimization of the Lagrange function and from the method of successive substitutions. The algorithms may be written in a `product form'. Numerical illustrations are given for simulated images corrupted by photon noise. The effects of the regularization are shown in the Fourier plane. The tests we have made indicate that a noticeable improvement of the results may be obtained for some of these explicitly regularized algorithms. We also show that a comparison with a Wiener filter can give the optimal regularizing conditions (operator and strength).
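
    A hedged sketch of one common multiplicative ("product form") regularized Richardson-Lucy update for Poisson data, using a one-step-late Tikhonov penalty on a toy 1-D deblurring problem; the exact updates derived in the Communication may differ.

```python
import numpy as np

# Hedged sketch of a multiplicative ("product form") regularized Richardson-Lucy
# update for Poisson data with a one-step-late Tikhonov penalty, on a toy 1-D
# deblurring problem.  The exact updates of the Communication may differ.
rng = np.random.default_rng(7)
n = 64
truth = np.zeros(n); truth[25:35] = 10.0                   # object (photon intensities)
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2); kernel /= kernel.sum()
A = np.array([np.roll(np.pad(kernel, (0, n - kernel.size)), i - 5) for i in range(n)])
y = rng.poisson(A @ truth)                                 # blurred, photon-noise data

lam, eps = 0.01, 1e-12                                     # penalty weight (assumed), guard
x = np.ones(n)
for _ in range(200):
    ratio = y / (A @ x + eps)
    x = x * (A.T @ ratio) / (A.T @ np.ones(n) + lam * x + eps)   # penalty enters the denominator

print("reconstructed peak:", round(x.max(), 2), " true peak:", truth.max())
```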

  15. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  16. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  17. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  18. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  19. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  20. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in the sanitization of user input in software systems often lead to vulnerabilities, and many of them are caused by improper use of regular replacement. This paper presents a precise modeling of the various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant semantics, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both forward and backward image computation in model checking and symbolic execution of text-processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FSTs. A compact representation of FSTs is implemented in SUSHI, a string constraint solver, and applied to detecting vulnerabilities in web applications.
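
    A small illustration of why replacement semantics matter, using Python's regex engine rather than the paper's finite state transducers: greedy and reluctant matching of the same pattern produce different sanitized strings.

```python
import re

# Greedy vs reluctant replacement semantics (the paper models these with finite
# state transducers; Python's regex engine is used here purely for illustration).
s = "<b>bold</b> and <i>italic</i>"

greedy = re.sub(r"<.*>", "", s)      # '.*' is greedy: one match from the first '<' to the last '>'
reluctant = re.sub(r"<.*?>", "", s)  # '.*?' is reluctant: each tag is matched and removed separately

print(repr(greedy))     # ''                 -> the whole string was swallowed
print(repr(reluctant))  # 'bold and italic'  -> only the tags were removed
```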

  1. Generalized Higher Degree Total Variation (HDTV) Regularization

    PubMed Central

    Hu, Yue; Ongie, Greg; Ramani, Sathish; Jacob, Mathews

    2015-01-01

    We introduce a family of novel image regularization penalties called generalized higher degree total variation (HDTV). These penalties further extend our previously introduced HDTV penalties, which generalize the popular total variation (TV) penalty to incorporate higher degree image derivatives. We show that many of the proposed second degree extensions of TV are special cases or are closely approximated by a generalized HDTV penalty. Additionally, we propose a novel fast alternating minimization algorithm for solving image recovery problems with HDTV and generalized HDTV regularization. The new algorithm enjoys a ten-fold speed up compared to the iteratively reweighted majorize minimize algorithm proposed in a previous work. Numerical experiments on 3D magnetic resonance images and 3D microscopy images show that HDTV and generalized HDTV improve the image quality significantly compared with TV. PMID:24710832

  2. Charged fermions tunneling from regular black holes

    SciTech Connect

    Sharif, M. Javed, W.

    2012-11-15

    We study Hawking radiation of charged fermions as a tunneling process from charged regular black holes, i.e., the Bardeen and ABGB black holes. For this purpose, we apply the semiclassical WKB approximation to the general covariant Dirac equation for charged particles and evaluate the tunneling probabilities. We recover the Hawking temperature corresponding to these charged regular black holes. Further, we consider the back-reaction effects of the emitted spin particles from black holes and calculate their corresponding quantum corrections to the radiation spectrum. We find that this radiation spectrum is not purely thermal due to the energy and charge conservation but has some corrections. In the absence of charge, e = 0, our results are consistent with those already present in the literature.

  3. A regular version of Smilansky model

    SciTech Connect

    Barseghyan, Diana; Exner, Pavel

    2014-04-15

    We discuss a modification of the Smilansky model in which a singular potential “channel” is replaced by a regular potential, unbounded from below, which shrinks as it becomes deeper. We demonstrate that, similarly to the original model, such a system exhibits a spectral transition with respect to the coupling constant, and we determine the critical value above which a new spectral branch opens. The result is generalized to situations with multiple potential “channels”.

  4. A regularization approach to hydrofacies delineation

    SciTech Connect

    Wohlberg, Brendt; Tartakovsky, Daniel

    2009-01-01

    We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.

  5. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in other basis space, we develop a general sparse regularization method based on minimizing l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, other three sparse dictionaries including Db6 wavelets, Sym4 wavelets and cubic B-spline functions can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
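
    A toy sketch of the sparse-regularization step alone, using a basic ISTA iteration on a Dirac (impulse) dictionary with an assumed impulse response; SpaRSA itself, the measured transfer functions and the other dictionaries discussed above are not reproduced.

```python
import numpy as np

# Basic ISTA on a Dirac (impulse) dictionary as a stand-in for the sparse
# regularization step; SpaRSA, the measured transfer functions and the wavelet /
# B-spline / cosine dictionaries discussed above are not reproduced.
#   min_c 0.5 * ||H c - y||^2 + lam * ||c||_1
rng = np.random.default_rng(8)
n = 200
h = np.exp(-np.arange(10) / 3.0)                 # assumed short impulse response
H = np.zeros((n, n))
for i in range(n):                               # column i = response to an impulse at sample i
    H[i: i + h.size, i] = h[: n - i]

force = np.zeros(n); force[30] = 1.0; force[120] = 0.6   # two impacts (ground truth)
y = H @ force + 0.01 * rng.normal(size=n)

lam = 0.05
step = 1.0 / np.linalg.norm(H, 2) ** 2           # 1/L, L = Lipschitz constant of the gradient
c = np.zeros(n)
for _ in range(500):
    z = c - step * (H.T @ (H @ c - y))                         # gradient step on the data term
    c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding (l1 prox)

print("recovered impact samples:", np.nonzero(np.abs(c) > 0.1)[0])   # expect [30 120]
```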

  6. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior; hence, the strategy is very robust against measurement noise and also avoids converging onto noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. The phantom inclusions were fitted with various fluorochrome (Cy5.5) fillings, for which optical data were acquired at 60 projections over 360 degrees. Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data and matched with the optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother compared to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.

  7. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  8. Regularization Parameter Selections via Generalized Information Criterion

    PubMed Central

    Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling

    2009-01-01

    We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material. PMID:20676354
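
    A concrete sketch of the selection recipe under a simplifying assumption: the GIC is applied to plain ridge regression (with the trace of the hat matrix as the degrees of freedom) rather than to the nonconcave penalized likelihood studied in the paper.

```python
import numpy as np

# GIC-based choice of the regularization parameter, applied here to plain ridge
# regression for concreteness (the paper treats nonconcave penalized likelihood
# such as SCAD).  df(lam) is the trace of the ridge hat matrix;
#   GIC(lam) = n * log(RSS/n) + kappa * df(lam),  kappa = 2 (AIC) or log(n) (BIC).
rng = np.random.default_rng(9)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = (2.0, -1.5, 1.0)          # sparse truth
y = X @ beta + rng.normal(size=n)

def gic(lam, kappa):
    hat = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)   # ridge hat matrix
    rss = np.sum((y - hat @ y) ** 2)
    return n * np.log(rss / n) + kappa * np.trace(hat)

lams = np.logspace(-2, 3, 30)
best_aic = lams[np.argmin([gic(l, 2.0) for l in lams])]
best_bic = lams[np.argmin([gic(l, np.log(n)) for l in lams])]
# the larger BIC-type penalty on df selects at least as much shrinkage as the AIC-type one
print("AIC-type lambda:", round(best_aic, 3), "  BIC-type lambda:", round(best_bic, 3))
```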

  9. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

    We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  10. Regular language constrained sequence alignment revisited.

    PubMed

    Kucherov, Gregory; Pinhas, Tamar; Ziv-Ukelson, Michal

    2011-05-01

    Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, the Regular Expression Constrained Sequence Alignment Problem was introduced, together with an O(n²t⁴) time and O(n²t²) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the input non-deterministic automaton. A faster O(n²t³) time algorithm for the same problem was subsequently proposed. In this article, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst case time complexity bound to O(n²t³/log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner Tree computation. While it does not improve the worst case, our simulations show that both approaches are efficient in practice, especially when the input automata are dense. PMID:21554020