Science.gov

Sample records for dimensionally regularized polyakov

  1. How the Polyakov loop and the regularization affect strangeness and restoration of symmetries at finite T

    SciTech Connect

    Ruivo, M. C.; Costa, P.; Sousa, C. A. de; Hansen, H.

    2010-08-05

    The effects of the Polyakov loop and of a regularization procedure that allows the presence of high-momentum quark states at finite temperature are investigated within the Polyakov-loop extended Nambu-Jona-Lasinio model. The characteristic temperatures, as well as the behavior of observables that signal deconfinement and the restoration of chiral and axial symmetries, are analyzed, paying special attention to the strangeness degrees of freedom. We observe that the cumulative effects of the Polyakov loop and of the regularization procedure yield a better description of the thermodynamics, as compared with lattice estimates. We find a faster partial restoration of chiral symmetry, and the restoration of the axial symmetry appears as a natural consequence of the full recovery of the dynamically broken chiral symmetry. These results show the relevance of the interplay among the Polyakov-loop dynamics, the high-momentum quark states, and the restoration of the chiral and axial symmetries at finite temperature.

  2. Dimensional regularization in configuration space

    SciTech Connect

    Bollini, C.G.; Giambiagi, J.J.

    1996-05-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in ν dimensions the perturbative momentum-space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov and Shirkov, are needed for convolutions. The regularized causal functions in x space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant analytic functions of ν. Several examples are discussed. © 1996 The American Physical Society.
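
    The claim that ultraviolet divergences appear as poles of analytic functions of ν can be checked on the textbook one-loop example ∫ d^ν k (k² + m²)⁻² = π^(ν/2) Γ(2 − ν/2) m^(ν−4) (a standard result used here for illustration, not a formula quoted from this record); a minimal sympy sketch:

```python
import sympy as sp

nu, m, eps = sp.symbols('nu m epsilon', positive=True)

# Standard one-loop Euclidean integral in nu dimensions:
#   I(nu) = integral d^nu k / (k^2 + m^2)^2
#         = pi^(nu/2) * Gamma(2 - nu/2) * m^(nu - 4)
I = sp.pi**(nu / 2) * sp.gamma(2 - nu / 2) * m**(nu - 4)

# Continue to nu = 4 - 2*eps: the UV divergence shows up as a simple
# pole in eps, i.e. a pole of the analytic function of nu at nu = 4.
residue = sp.limit(eps * I.subs(nu, 4 - 2 * eps), eps, 0)
print(residue)  # pi**2
```

    The residue π² of the pole at ν = 4 is exactly the coefficient a subtraction scheme would remove.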

  3. Weighted power counting and chiral dimensional regularization

    NASA Astrophysics Data System (ADS)

    Anselmi, Damiano

    2014-06-01

    We define a modified dimensional-regularization technique that overcomes several difficulties of the ordinary technique, and is specially designed to work efficiently in chiral and parity violating quantum field theories, in arbitrary dimensions greater than 2. When the dimension of spacetime is continued to complex values, spinors, vectors and tensors keep the components they have in the physical dimension; therefore, the γ matrices are the standard ones. Propagators are regularized with the help of evanescent higher-derivative kinetic terms, which are of the Majorana type in the case of chiral fermions. If the new terms are organized in a clever way, weighted power counting provides an efficient control on the renormalization of the theory, and allows us to show that the resulting chiral dimensional regularization is consistent to all orders. The new technique considerably simplifies the proofs of properties that hold to all orders, and makes them suitable to be generalized to wider classes of models. Typical examples are the renormalizability of chiral gauge theories and the Adler-Bardeen theorem. The difficulty of explicit computations, on the other hand, may increase.

  4. Dimensional regularization and dimensional reduction in the light cone

    SciTech Connect

    Qiu, J.

    2008-06-15

    We calculate all of the 2 → 2 scattering processes in Yang-Mills theory in the light-cone gauge, with the dimensional regulator as the UV regulator. The IR is regulated with a cutoff in q⁺. This supplements our earlier work, where a Lorentz-noncovariant regulator was used and the final results suffered from problems in the gauge fixing. Supersymmetry relations among the various amplitudes are checked using light-cone superfields.

  5. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data, due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  7. Matching effective chiral Lagrangians with dimensional and lattice regularizations

    NASA Astrophysics Data System (ADS)

    Niedermayer, F.; Weisz, P.

    2016-04-01

    We compute the free energy in the presence of a chemical potential coupled to a conserved charge in effective O(n) scalar field theory (without explicit symmetry breaking terms) to third order for asymmetric volumes in general d dimensions, using dimensional (DR) and lattice regularizations. This yields relations between the 4-derivative couplings appearing in the effective actions for the two regularizations, which in turn allows us to translate results, e.g. the mass gap in a finite periodic box in d = 3 + 1 dimensions, from one regularization to the other. Consistency is found with a new direct computation of the mass gap using DR. For the case n = 4, d = 4 the model is the low-energy effective theory of QCD with N_f = 2 massless quarks. The results can thus be used to obtain estimates of low-energy constants in the effective chiral Lagrangian from measurements of low-energy observables, including the low-lying spectrum of N_f = 2 QCD in the δ-regime using lattice simulations, as proposed by Peter Hasenfratz, or from the susceptibility corresponding to the chemical potential used.

  8. Higher-Order Global Regularity of an Inviscid Voigt-Regularization of the Three-Dimensional Inviscid Resistive Magnetohydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Titi, Edriss S.

    2014-03-01

    We prove existence, uniqueness, and higher-order global regularity of strong solutions to a particular Voigt-regularization of the three-dimensional inviscid resistive magnetohydrodynamic (MHD) equations. Specifically, the coupling of a resistive magnetic field to the Euler-Voigt model is introduced to form an inviscid regularization of the inviscid resistive MHD system. The results hold in both the whole space ℝ³ and in the context of periodic boundary conditions. Weak solutions for this regularized model are also considered, and proven to exist globally in time, but the question of uniqueness for weak solutions is still open. Furthermore, we show that the solutions of the Voigt regularized system converge, as the regularization parameter α → 0, to strong solutions of the original inviscid resistive MHD, on the corresponding time interval of existence of the latter. Moreover, we also establish a new criterion for blow-up of solutions to the original MHD system inspired by this Voigt regularization.

  9. Higher-Order Global Regularity of an Inviscid Voigt-Regularization of the Three-Dimensional Inviscid Resistive Magnetohydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Titi, Edriss S.

    2013-05-01

    We prove existence, uniqueness, and higher-order global regularity of strong solutions to a particular Voigt-regularization of the three-dimensional inviscid resistive magnetohydrodynamic (MHD) equations. Specifically, the coupling of a resistive magnetic field to the Euler-Voigt model is introduced to form an inviscid regularization of the inviscid resistive MHD system. The results hold in both the whole space ℝ³ and in the context of periodic boundary conditions. Weak solutions for this regularized model are also considered, and proven to exist globally in time, but the question of uniqueness for weak solutions is still open. Furthermore, we show that the solutions of the Voigt regularized system converge, as the regularization parameter α → 0, to strong solutions of the original inviscid resistive MHD, on the corresponding time interval of existence of the latter. Moreover, we also establish a new criterion for blow-up of solutions to the original MHD system inspired by this Voigt regularization.

  10. Dimensional reduction in numerical relativity: Modified Cartoon formalism and regularization

    NASA Astrophysics Data System (ADS)

    Cook, William G.; Figueras, Pau; Kunesch, Markus; Sperhake, Ulrich; Tunyasuvunakool, Saran

    2016-06-01

    We present in detail the Einstein equations in the Baumgarte-Shapiro-Shibata-Nakamura formulation for the case of D-dimensional spacetimes with SO(D − d) isometry based on a method originally introduced in Ref. 1. Regularized expressions are given for a numerical implementation of this method on a vertex-centered grid including the origin of the quasi-radial coordinate that covers the extra dimensions with rotational symmetry. Axisymmetry, corresponding to the value d = D − 2, represents a special case with fewer constraints on the vanishing of tensor components and is conveniently implemented in a variation of the general method. The robustness of the scheme is demonstrated for the case of a black-hole head-on collision in D = 7 spacetime dimensions with SO(4) symmetry.

  11. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    PubMed

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation-time scheme, at a moderate computational overhead. PMID:25353924
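
    The regularization step of such schemes is compact enough to sketch. The following is a generic D2Q9 illustration of projecting the non-equilibrium populations onto their second-order moment before collision (a sketch of the standard regularized-LBM idea, not the authors' implementation; function names are ours, and the D2Q9 constants are the textbook ones):

```python
import numpy as np

# Standard D2Q9 lattice: velocities, weights, speed of sound squared
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1 / 3

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium populations."""
    cu = c @ u
    usq = u @ u
    return rho * w * (1 + cu / cs2 + cu**2 / (2 * cs2**2) - usq / (2 * cs2))

def regularize(f, rho, u):
    """Replace f's non-equilibrium part by its projection onto the
    second-order (momentum-flux) contribution; higher, non-hydrodynamic
    moments are discarded, which is the source of the stability gain."""
    feq = equilibrium(rho, u)
    fneq = f - feq
    # Non-equilibrium momentum flux Pi_ab = sum_i fneq_i c_ia c_ib
    Pi = np.einsum('i,ia,ib->ab', fneq, c, c)
    # Q_i,ab = c_ia c_ib - cs2 * delta_ab
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)
    fneq_reg = w / (2 * cs2**2) * np.einsum('iab,ab->i', Q, Pi)
    return feq + fneq_reg
```

    A BGK collision then relaxes `regularize(f, rho, u)` toward `equilibrium(rho, u)` as usual; by construction the regularized non-equilibrium part carries zero mass and momentum.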

  12. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    PubMed

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

    Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weights; however, using these weights may not be preferable for two reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues while encouraging grouping effects simultaneously. The real data results indicate that AAElastic is significantly more consistent in selecting genes than the three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method for high-dimensional cancer classification. PMID:26520484
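
    The two-stage adaptive idea can be sketched with scikit-learn. This is a generic adaptive elastic net (AAElastic's adjusted initial weights are not reproduced here, and the function name and defaults are ours): stage 1 fits an ordinary elastic net, stage 2 reweights each gene's penalty by the inverse of its initial estimate, implemented via the standard column-rescaling reformulation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adaptive_elastic_net(X, y, C=1.0, l1_ratio=0.5, gamma=1.0, eps=1e-6):
    """Generic adaptive elastic-net logistic regression (sketch).
    Stage 1: ordinary elastic-net fit gives initial estimates beta0.
    Stage 2: penalize gene j with weight |beta0_j|^-gamma, done by
    rescaling column j of X by |beta0_j|^gamma and mapping back."""
    enet = dict(penalty='elasticnet', solver='saga',
                C=C, l1_ratio=l1_ratio, max_iter=5000)
    beta0 = LogisticRegression(**enet).fit(X, y).coef_.ravel()
    scale = (np.abs(beta0) + eps) ** gamma   # inverse penalty weights
    theta = LogisticRegression(**enet).fit(X * scale, y).coef_.ravel()
    return theta * scale                     # back to the original scale
```

    Genes shrunk near zero in stage 1 receive a heavy penalty in stage 2, while strong genes are penalized lightly, which is what yields the oracle-type selection behavior of adaptive penalties.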

  13. On the Global Regularity of the Two-Dimensional Density Patch for Inhomogeneous Incompressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Liao, Xian; Zhang, Ping

    2016-06-01

    Regarding P.-L. Lions' open question in Oxford Lecture Series in Mathematics and its Applications, Vol. 3 (1996) concerning the propagation of regularity for the density patch, we establish the global existence of solutions to the two-dimensional inhomogeneous incompressible Navier-Stokes system with initial density given by (1 − η)1_{Ω₀} + 1_{Ω₀ᶜ} for some small enough constant η and some W^{k+2,p} domain Ω₀, with initial vorticity belonging to L¹ ∩ Lᵖ and with appropriate tangential regularities. Furthermore, we prove that the regularity of the domain Ω₀ is preserved by the time evolution.

  14. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
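
    The two-stage structure can be sketched with off-the-shelf Lasso fits (a minimal sketch of the general idea only; the paper's concave penalties, tuning, and coordinate-descent details are not reproduced, and the function name and penalty levels are ours):

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_lasso_iv(Z, X, y, alpha1=0.01, alpha2=0.05):
    """Sparse two-stage least squares (sketch).
    Stage 1: regress each covariate on the instruments Z with an L1
    penalty, selecting optimal instruments and producing fitted values.
    Stage 2: regress y on the fitted covariates with an L1 penalty,
    selecting the important covariate effects."""
    Xhat = np.column_stack([
        Lasso(alpha=alpha1, max_iter=50000).fit(Z, X[:, j]).predict(Z)
        for j in range(X.shape[1])
    ])
    return Lasso(alpha=alpha2, max_iter=50000).fit(Xhat, y).coef_
```

    Replacing X by its instrument-predicted part in stage 2 is exactly the classical 2SLS device; the L1 penalties are what let both stages scale to dimensions larger than the sample size.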

  15. A Regular Tetrahedron Formation Strategy for Swarm Robots in Three-Dimensional Environment

    NASA Astrophysics Data System (ADS)

    Ercan, M. Fikret; Li, Xiang; Liang, Ximing

    A decentralized control method, namely Regular Tetrahedron Formation (RTF), is presented for a swarm of simple robots operating in three-dimensional space. It is based on a virtual spring mechanism and enables four neighboring robots to autonomously form a Regular Tetrahedron (RT) regardless of their initial positions. The RTF method is applied to swarms of various sizes through a dynamic neighbor selection procedure: each robot's behavior depends only on the positions of three dynamically selected neighbors. An obstacle avoidance model is also introduced. Finally, the algorithm is studied with computational experiments, which demonstrate that it is effective.
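
    The virtual-spring idea for four robots can be sketched in a few lines: put a Hooke-type spring of natural length L on every pair, and the only equilibrium reached from generic initial positions is a regular tetrahedron of edge L. (This is an illustrative toy, not the paper's RTF controller; the gains `k`, `dt` and the function name are our assumptions.)

```python
import numpy as np
from itertools import combinations

def rtf_step(pos, L=1.0, k=0.5, dt=0.1):
    """One overdamped virtual-spring update for a 4-robot team:
    each pairwise link pushes/pulls its pair toward distance L."""
    force = np.zeros_like(pos)
    for i, j in combinations(range(len(pos)), 2):
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d)
        f = k * (dist - L) * d / dist   # Hooke-type virtual spring
        force[i] += f
        force[j] -= f
    return pos + dt * force

rng = np.random.default_rng(1)
pos = rng.random((4, 3))                # arbitrary initial positions
for _ in range(2000):
    pos = rtf_step(pos)
edges = [np.linalg.norm(pos[i] - pos[j])
         for i, j in combinations(range(4), 2)]
```

    After the loop all six edge lengths agree with L, i.e. the four points form a regular tetrahedron independent of the starting configuration.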

  16. Dimensional regularization of the path integral in curved space on an infinite time interval

    NASA Astrophysics Data System (ADS)

    Bastianelli, F.; Corradini, O.; van Nieuwenhuizen, P.

    2000-09-01

    We use dimensional regularization to evaluate quantum mechanical path integrals in arbitrary curved spaces on an infinite time interval. We perform 3-loop calculations in Riemann normal coordinates, and 2-loop calculations in general coordinates. It is shown that one only needs a covariant two-loop counterterm (V_DR = (ℏ²/8)R) to obtain the same results as obtained earlier in other regularization schemes. It is also shown that the mass term needed in order to avoid infrared divergences explicitly breaks general covariance in the final result.

  17. A New 2-Dimensional Millimeter Wave Radiation Imaging System Based on Finite Difference Regularization

    NASA Astrophysics Data System (ADS)

    Zhu, Lu; Liu, Yuanyuan; Chen, Suhua; Hu, Fei; Chen, Zhizhang (David)

    2015-04-01

    Synthetic aperture imaging radiometer (SAIR) has the potential to meet the spatial resolution requirement of passive millimeter remote sensing from space. A new two-dimensional (2-D) imaging radiometer at millimeter wave (MMW) band is described in this paper; it uses a one-dimensional (1-D) synthetic aperture digital radiometer (SADR) to obtain an image in one dimension and a rotary platform to provide a scan in the second dimension. Due to the ill-posed nature of the SADR inverse problem, we propose a new reconstruction algorithm based on finite difference (FD) regularization to improve the brightness temperature images. Experimental results show that the proposed 2-D MMW radiometer can produce brightness temperature images of natural scenes and that the FD regularization reconstruction algorithm improves the quality of these images.
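
    In its simplest form, FD regularization of an ill-posed linear inverse problem means penalizing the finite differences of the unknown (a generic first-order Tikhonov sketch, not the paper's SADR reconstruction; the function name and default λ are ours):

```python
import numpy as np

def fd_regularized_solve(A, y, lam=1e-2):
    """Minimize ||A x - y||^2 + lam * ||D x||^2 via the normal
    equations, where D is the forward finite-difference matrix.
    Penalizing D x suppresses the wild oscillations that an
    unregularized inverse of an ill-posed A would produce."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)   # (n-1, n) difference operator
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
```

    Larger λ gives smoother reconstructions at the cost of resolution; λ is chosen by balancing data fidelity against smoothness.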

  18. Regularization strategy for an inverse problem for a 1 + 1 dimensional wave equation

    NASA Astrophysics Data System (ADS)

    Korpela, Jussi; Lassas, Matti; Oksanen, Lauri

    2016-06-01

    An inverse boundary value problem for a 1 + 1 dimensional wave equation with a wave speed c(x) is considered. We give a regularization strategy for inverting the map A : c ↦ Λ, where Λ is the hyperbolic Neumann-to-Dirichlet map corresponding to the wave speed c. That is, we consider the case when we are given a perturbation of the Neumann-to-Dirichlet map Λ̃ = Λ + E, where E corresponds to the measurement errors, and reconstruct an approximate wave speed c̃. We emphasize that Λ̃ may not be in the range of the map A. We show that the reconstructed wave speed satisfies ‖c̃ − c‖ ≤ C‖E‖^{1/54}. Our regularization strategy is based on a new formula to compute c from Λ.

  19. Random packing of regular polygons and star polygons on a flat two-dimensional surface.

    PubMed

    Cieśla, Michał; Barbasz, Jakub

    2014-08-01

    Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using a random sequential adsorption algorithm. The obtained results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packings reproduce the already known results for disks. PMID:25215737
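
    The random sequential adsorption algorithm itself is simple to sketch. The version below uses disks, the limiting shape the abstract mentions, on a periodic unit square (an illustrative sketch, not the authors' code; names and parameters are ours):

```python
import numpy as np

def periodic_dist(p, q):
    """Minimum-image distance between two points on the unit torus."""
    d = np.abs(p - q)
    d = np.minimum(d, 1.0 - d)
    return np.hypot(d[0], d[1])

def rsa_disks(radius=0.02, attempts=5000, seed=0):
    """Random sequential adsorption: drop disk centers uniformly at
    random and accept a trial only if it overlaps no disk already
    adsorbed; accepted disks are never moved or removed."""
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(attempts):
        p = rng.random(2)
        if all(periodic_dist(p, q) >= 2 * radius for q in centers):
            centers.append(p)
    packing_fraction = len(centers) * np.pi * radius**2
    return np.array(centers), packing_fraction
```

    The accepted fraction approaches the known disk saturation value of about 0.547 only slowly (the kinetics the abstract measures); a finite number of attempts stays below it.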

  20. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  1. Dimensional regularization of local singularities in the fourth post-Newtonian two-point-mass Hamiltonian

    NASA Astrophysics Data System (ADS)

    Jaranowski, Piotr; Schäfer, Gerhard

    2013-04-01

    The article delivers the only remaining unknown coefficient in the 4th post-Newtonian energy expression for binary point masses on circular orbits as a function of the orbital angular frequency. Apart from a single coefficient, which is known solely numerically, all the coefficients are given as exact numbers. The Hamiltonian is presented in the center-of-mass frame, and 51 of its 57 coefficients are given fully explicitly, six coefficients more than previously achieved [P. Jaranowski and G. Schäfer, Phys. Rev. D 86, 061503(R) (2012)]. The local divergences in the point-mass model are uniquely controlled by the method of dimensional regularization. As an application, the last stable circular orbit is determined as a function of the symmetric-mass-ratio parameter.

  2. Application of double-dimensional regularization in a nonabelian gauge theory

    SciTech Connect

    Karnaukhov, S.N.

    1986-04-01

    Calculations of the polarization operator and vertex function in a nonabelian gauge theory are performed in second order of perturbation theory on the basis of the method of I. V. Tyutin (JETP Lett. 35, 428 (1982)). In this calculation the formal contribution of the ghosts disappears, but the expressions for the polarization operator and vertex function are modified in such a way that the contribution of the ghosts is automatically taken into account. For the gauge-invariant β-function the answer coincides with the known expression, but for the polarization operator and vertex function the dependence on the gauge parameter differs from that in standard calculations. It is shown that the calculations can be performed in the framework of dimensional regularization with a special choice of gauge condition.

  3. Computational methodology to determine fluid related parameters of non regular three-dimensional scaffolds.

    PubMed

    Acosta Santamaría, Víctor Andrés; Malvè, M; Duizabo, A; Mena Tobar, A; Gallego Ferrer, G; García Aznar, J M; Doblaré, M; Ochoa, I

    2013-11-01

    The application of three-dimensional (3D) biomaterials to facilitate the adhesion, proliferation, and differentiation of cells has been widely studied for tissue engineering purposes. The fabrication methods used to improve the mechanical response of the scaffold produce complex, non-regular structures. Apart from the mechanical aspect, the fluid behavior in the inner part of the scaffold should also be considered. Parameters such as permeability (k) or wall shear stress (WSS) are important aspects in the provision of nutrients, the removal of metabolic waste products, and the mechanically induced differentiation of cells attached to the trabecular network of the scaffolds. Experimental measurements of these parameters are not available in all labs, yet the fluid parameters should be known prior to other types of experiments. The present work compares an experimental study with a computational fluid dynamics (CFD) methodology to determine the relevant fluid parameters (k and WSS) of complex non-regular poly(L-lactic acid) scaffolds, based only on the treatment of microphotographic images obtained with a microCT (μCT). The CFD analysis shows similar tendencies and results, with low relative differences compared to those of the experimental study, for high flow rates. For low flow rates the accuracy of this prediction decreases. The correlation between the computational and experimental results validates the robustness of the proposed methodology. PMID:23807712
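
    Scaffold permeability in such comparisons is conventionally defined through Darcy's law, k = Q·μ·L/(A·ΔP) (the standard relation, not a formula quoted from this record); a trivial helper with explicit units:

```python
def darcy_permeability(Q, mu, L, A, dP):
    """Darcy's law: k = Q * mu * L / (A * dP), with
    Q  volumetric flow rate [m^3/s], mu dynamic viscosity [Pa s],
    L  sample length [m], A cross-sectional area [m^2],
    dP pressure drop across the sample [Pa]; returns k in [m^2].
    Illustrative helper only, not the paper's code."""
    return Q * mu * L / (A * dP)
```

    Both the experimental rig and the CFD model reduce to this relation once Q and ΔP are known for a given specimen geometry.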

  4. Polyakov loop and correlator of Polyakov loops at next-to-next-to-leading order

    SciTech Connect

    Brambilla, Nora; Vairo, Antonio; Ghiglieri, Jacopo; Petreczky, Peter

    2010-10-01

    We study the Polyakov loop and the correlator of two Polyakov loops at finite temperature in the weak-coupling regime. We calculate the Polyakov loop at order g⁴. The calculation of the correlator of two Polyakov loops is performed at distances shorter than the inverse of the temperature and for electric screening masses larger than the Coulomb potential. In this regime, it is accurate up to order g⁶. We also evaluate the Polyakov-loop correlator in an effective field theory framework that takes advantage of the hierarchy of energy scales in the problem and makes explicit the bound-state dynamics. In the effective field theory framework, we show that the Polyakov-loop correlator is, at leading order in the multipole expansion, the sum of a color-singlet and a color-octet quark-antiquark correlator, which are gauge invariant, and compute the corresponding color-singlet and color-octet free energies.

  5. Regularization for Cox's Proportional Hazards Model with NP-Dimensionality

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays, with thousands of measurements per sample and a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171
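
    For reference, the SCAD member of the folded-concave family has a closed form (Fan and Li's standard definition with the usual a = 3.7; this is the textbook penalty, not code from the paper):

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Elementwise SCAD penalty: linear like the LASSO near zero,
    quadratically tapering on lam < |b| <= a*lam, and constant beyond
    a*lam, which is what removes the bias on large coefficients."""
    b = np.abs(np.asarray(beta, dtype=float))
    linear = lam * b
    taper = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    flat = lam**2 * (a + 1) / 2
    return np.where(b <= lam, linear,
                    np.where(b <= a * lam, taper, flat))
```

    The flat tail is the "folded-concave" feature: beyond a·λ the penalty contributes no shrinkage at all, in contrast to the LASSO's constant pull.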

  6. Renormalization of the Polyakov loop with gradient flow

    NASA Astrophysics Data System (ADS)

    Petreczky, P.; Schadler, H.-P.

    2015-11-01

    We use the gradient flow for the renormalization of the Polyakov loop in various representations. Using 2+1 flavor QCD with highly improved staggered quarks and lattices with temporal extents of Nτ = 6, 8, 10 and 12, we calculate the renormalized Polyakov loop in many representations including fundamental, sextet, adjoint, decuplet, 15-plet, 24-plet and 27-plet. This approach allows for calculations of the renormalized Polyakov loops over a large temperature range from T = 116 MeV up to T = 815 MeV, with small errors not only for the Polyakov loop in the fundamental representation, but also for the Polyakov loops in higher representations. We compare our results with standard renormalization schemes and discuss the Casimir scaling of the Polyakov loops.

  7. Inhomogeneous Polyakov loop induced by inhomogeneous chiral condensates

    NASA Astrophysics Data System (ADS)

    Hayata, Tomoya; Yamamoto, Arata

    2015-05-01

    We study the spatial inhomogeneity of the Polyakov loop induced by inhomogeneous chiral condensates. We formulate an effective model of gluons on the background fields of chiral condensates, and perform its lattice simulation. On the background of inhomogeneous chiral condensates, the Polyakov loop exhibits an in-phase spatial oscillation with the chiral condensates. We also analyze the heavy quark potential and show that the inhomogeneous Polyakov loop indicates the inhomogeneous confinement of heavy quarks.

  8. Renormalization group treatment of polymer excluded volume by t'Hooft-Veltman-type dimensional regularization

    NASA Astrophysics Data System (ADS)

    Kholodenko, A. L.; Freed, Karl F.

    1983-06-01

    The chain conformation space renormalization group method is transformed into a representation where the t'Hooft-Veltman method of dimensional regularization can directly be applied to problems involving polymer excluded volume. This t'Hooft-Veltman-type representation enables a comparison to be made with other direct renormalization methods for polymer excluded volume. In contrast to the latter, the current method and the chain conformation one from which it is derived are not restricted to the asymptotic limit of very long chains and do not require the cumbersome use of insertions to calculate the relevant exponents. Furthermore, the theory emerges directly in polymer language from the traditional excluded volume perturbation expansion which provides the correct weight factors for the diagrams. Special attention is paid to the general diagrammatic structure of the theory and to the renormalization prescription in order that this prescription follows from considerations on measurable quantities. The theory is illustrated by calculation of the mean square end-to-end distance and second virial coefficient to second order including the full crossover dependence on the renormalized strength of the excluded volume interaction and on the chain length. A subsequent paper provides the generalization of the theory to the treatment of excluded volume effects in polyelectrolytes.

  9. Globally regular instability of 3-dimensional anti-de Sitter spacetime.

    PubMed

    Bizoń, Piotr; Jałmużna, Joanna

    2013-07-26

    We consider three-dimensional anti-de Sitter (AdS) gravity minimally coupled to a massless scalar field and study numerically the evolution of small smooth circularly symmetric perturbations of the AdS3 spacetime. As in higher dimensions, for a large class of perturbations, we observe a turbulent cascade of energy to high frequencies which entails instability of AdS3. However, in contrast to higher dimensions, the cascade cannot be terminated by black hole formation because small perturbations have energy below the black hole threshold. This situation appears to be challenging for the cosmic censor. Analyzing the energy spectrum of the cascade we determine the width ρ(t) of the analyticity strip of solutions in the complex spatial plane and argue by extrapolation that ρ(t) does not vanish in finite time. This provides evidence that the turbulence is too weak to produce a naked singularity and the solutions remain globally regular in time, in accordance with the cosmic censorship hypothesis. PMID:23931347
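
    The analyticity-strip diagnostic used here rests on the standard fact that a spectrum of the form E_k ~ k^(−α) e^(−2ρk) has its exponential decay rate set by the strip width ρ; the fit can be sketched on synthetic data (this is an illustration of the generic method, not the paper's data or code):

```python
import numpy as np

def strip_width(k, E):
    """Least-squares fit of ln E_k = c - alpha*ln k - 2*rho*k;
    the coefficient of k gives the analyticity-strip width rho."""
    X = np.column_stack([np.ones_like(k), np.log(k), k])
    coef, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
    return -coef[2] / 2.0

# Synthetic spectrum with a known strip width rho = 0.3
k = np.arange(1.0, 200.0)
E = k**-2.0 * np.exp(-2 * 0.3 * k)
rho = strip_width(k, E)
```

    Tracking ρ(t) over the evolution and checking that it stays bounded away from zero is exactly the extrapolation argument for global regularity made in the abstract.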

  10. Accelerated motion corrected three‐dimensional abdominal MRI using total variation regularized SENSE reconstruction

    PubMed Central

    Atkinson, David; Buerger, Christian; Schaeffter, Tobias; Prieto, Claudia

    2015-01-01

Purpose: Develop a nonrigid motion corrected reconstruction for highly accelerated free‐breathing three‐dimensional (3D) abdominal images without external sensors or additional scans. Methods: The proposed method accelerates the acquisition by undersampling and performs motion correction directly in the reconstruction using a general matrix description of the acquisition. Data are acquired using a self‐gated 3D golden radial phase encoding trajectory, enabling a two-stage reconstruction to estimate and then correct motion of the same data. In the first stage, total variation regularized iterative SENSE is used to reconstruct highly undersampled respiratory resolved images. A nonrigid registration of these images is performed to estimate the complex motion in the abdomen. In the second stage, the estimated motion fields are incorporated in a general matrix reconstruction, which uses total variation regularization and incorporates k‐space data from multiple respiratory positions. The proposed approach was tested on nine healthy volunteers and compared against a standard gated reconstruction using measures of liver sharpness, gradient entropy, visual assessment of image sharpness, and overall image quality by two experts. Results: The proposed method achieves similar quality to the gated reconstruction, with nonsignificant differences for liver sharpness (1.18 and 1.00, respectively), gradient entropy (1.00 and 1.00), visual score of image sharpness (2.22 and 2.44), and visual rank of image quality (3.33 and 3.39). An average reduction of the acquisition time from 102 s to 39 s could be achieved with the proposed method. Conclusion: In vivo results demonstrate the feasibility of the proposed method, showing similar image quality to the standard gated reconstruction while using data corresponding to a significantly reduced acquisition time. Magn Reson Med, 2015. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of

  11. Three-dimensional beam pattern of regular sperm whale clicks confirms bent-horn hypothesis

    NASA Astrophysics Data System (ADS)

    Zimmer, Walter M. X.; Tyack, Peter L.; Johnson, Mark P.; Madsen, Peter T.

    2005-03-01

The three-dimensional beam pattern of a sperm whale (Physeter macrocephalus) tagged in the Ligurian Sea was derived using data on regular clicks from the tag and from hydrophones towed behind a ship circling the tagged whale. The tag defined the orientation of the whale, while sightings and beamformer data were used to locate the whale with respect to the ship. The existence of a narrow, forward-directed P1 beam with source levels exceeding 210 dBpeak re: 1 μPa at 1 m is confirmed. A modeled forward-beam pattern that matches clicks >20° off-axis predicts a directivity index of 26.7 dB and source levels of up to 229 dBpeak re: 1 μPa at 1 m. A broader backward-directed beam is produced by the P0 pulse with source levels near 200 dBpeak re: 1 μPa at 1 m and a directivity index of 7.4 dB. A low-frequency component with source levels near 190 dBpeak re: 1 μPa at 1 m is generated at the onset of the P0 pulse by air resonance. The results support the bent-horn model of sound production in sperm whales. While the sperm whale nose appears primarily adapted to produce an intense forward-directed sonar signal, less-directional click components convey information to conspecifics and give rise to echoes from the seafloor and the surface, which may be useful for orientation during dives.

  12. Two-dimensional encoder with picometre resolution using lattice spacing on regular crystalline surface as standard

    NASA Astrophysics Data System (ADS)

    Aketagawa, Masato; Honda, Hiroshi; Ishige, Masashi; Patamaporn, Chaikool

    2007-02-01

A two-dimensional (2D) encoder with picometre resolution using multi-tunnelling-probes scanning tunnelling microscope (MTP-STM) as detector units and a regular crystalline lattice as a reference is proposed. In experiments to demonstrate the method, a highly oriented pyrolytic graphite (HOPG) crystal is utilized as the reference. The MTP-STM heads, which are set upon a sample stage, observe multi-points which satisfy some relationship on the HOPG crystalline surface on the sample stage, and the relative 2D displacement between the MTP-STM heads and the sample stage can be determined from the multi-current signals of the multi-points. Two unit lattice vectors on the HOPG crystalline surface with length and intersection angle of 0.246 nm and 60°, respectively, are utilized as 2D displacement references. 2D displacement of the sample stage on which the HOPG crystal is placed can be calculated using the linear sum of the two unit lattice vectors, derived from a linear operation of the multi-current signals. Displacement interpolation less than the lattice spacing of the HOPG crystal can also be performed. To determine the linear sum of the two unit vectors as the 2D displacement, the multi-points to be observed with the MTP-STM must be properly positioned according to the 2D atomic structure of the HOPG crystal. In the experiments, the proposed method is compared with a capacitance sensor whose resolution is improved to approximately 0.1 nm by limiting the sensor's bandwidth to 300 Hz. In order to obtain suitable multi-current signals of the properly positioned multi-points in semi-real-time, lateral dither modulations are applied to the STM probes. The results show that the proposed method has the capability to measure 2D lateral displacements with a resolution on the order of 10 pm with a maximum measurement speed of 100 nm s^-1 or more.
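The displacement decoding described above amounts to a linear sum of the two lattice vectors. A minimal numerical sketch (variable names illustrative; the probe-signal processing that yields the lattice coordinates is not modeled):

```python
import numpy as np

D = 0.246  # HOPG lattice spacing in nm
a1 = D * np.array([1.0, 0.0])
a2 = D * np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])  # 60 deg from a1

def displacement(m, n):
    """Cartesian displacement (nm) for lattice coordinates (m, n).

    Fractional m and n stand in for the sub-lattice-spacing interpolation
    mentioned in the abstract.
    """
    return m * a1 + n * a2

step = displacement(1.0, 0.0)  # one full lattice vector along a1: ~0.246 nm
```

Any stage displacement is then a (possibly fractional) pair of coefficients on these two non-orthogonal unit cell vectors.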

  13. Effect of the Gribov horizon on the Polyakov loop and vice versa

    NASA Astrophysics Data System (ADS)

    Canfora, F. E.; Dudal, D.; Justo, I. F.; Pais, P.; Rosa, L.; Vercauteren, D.

    2015-07-01

We consider finite-temperature SU(2) gauge theory in the continuum formulation, which necessitates the choice of a gauge fixing. Choosing the Landau gauge, the existing gauge copies are taken into account by means of the Gribov-Zwanziger quantization scheme, which entails the introduction of a dynamical mass scale (Gribov mass) directly influencing the Green functions of the theory. Here, we determine simultaneously the Polyakov loop (vacuum expectation value) and Gribov mass in terms of temperature, by minimizing the vacuum energy w.r.t. the Polyakov-loop parameter and solving the Gribov gap equation. Inspired by Casimir-energy-style computations, we illustrate the usage of zeta function regularization in finite-temperature calculations. Our main result is that the Gribov mass directly feels the deconfinement transition, visible from a cusp occurring at the same temperature where the Polyakov loop becomes nonzero. In this exploratory work we mainly restrict ourselves to the original Gribov-Zwanziger quantization procedure in order to illustrate the approach and the potential direct link between the vacuum structure of the theory (dynamical mass scales) and (de)confinement. We also present a first look at the critical temperature obtained from the refined Gribov-Zwanziger approach. Finally, a particular problem for the pressure at low temperatures is reported.

  14. Remarks on the regularity criteria of three-dimensional magnetohydrodynamics system in terms of two velocity field components

    SciTech Connect

    Yamazaki, Kazuo

    2014-03-15

    We study the three-dimensional magnetohydrodynamics system and obtain its regularity criteria in terms of only two velocity vector field components eliminating the condition on the third component completely. The proof consists of a new decomposition of the four nonlinear terms of the system and estimating a component of the magnetic vector field in terms of the same component of the velocity vector field. This result may be seen as a component reduction result of many previous works [C. He and Z. Xin, “On the regularity of weak solutions to the magnetohydrodynamic equations,” J. Differ. Equ. 213(2), 234–254 (2005); Y. Zhou, “Remarks on regularities for the 3D MHD equations,” Discrete Contin. Dyn. Syst. 12(5), 881–886 (2005)].

  15. Memory-efficient iterative process on a two-dimensional first-order regular graph.

    PubMed

    Park, S C; Jeong, H

    2008-01-01

    We present a parallel and memory-efficient iterative algorithm based on 2D first-order regular graphs. For an M x N regular graph with L iterations, a carefully chosen computation order can reduce the memory resources from O(MN) to O(ML). This scheme can achieve a memory reduction of 4 to 27 times in typical computation-intensive problems such as stereo and motion. PMID:18157263
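The memory reduction rests on choosing a computation order so that only a thin slice of the grid is live at a time. A simplified analogy (a one-row rolling buffer for a generic first-order grid recurrence; this illustrates the principle, not the paper's exact O(ML) scheme):

```python
# Illustrative sketch: in a first-order grid recurrence each cell depends
# only on its left and upper neighbours, so a row-by-row computation order
# needs one row of storage instead of the full M x N table -- the same
# principle behind the O(MN) -> O(ML) reduction described in the abstract.
def grid_recurrence(M, N, f, boundary=0.0):
    """Value at cell (M-1, N-1) of the recurrence, using only O(N) memory."""
    prev = [boundary] * N              # holds row i-1
    for i in range(M):
        cur = [boundary] * N
        for j in range(N):
            left = cur[j - 1] if j > 0 else boundary
            up = prev[j]
            cur[j] = f(i, j, left, up)
        prev = cur                     # discard everything older than one row
    return prev[-1]

# Example: Pascal-like recurrence counting monotone lattice paths.
paths = grid_recurrence(4, 4,
                        lambda i, j, left, up: 1 if i == 0 or j == 0 else left + up)
print(paths)  # 20
```

The full table is never materialized; each row is overwritten as soon as the next one is complete.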

  16. Polyakov loop fluctuations in the Dirac eigenmode expansion

    NASA Astrophysics Data System (ADS)

    Doi, Takahiro M.; Redlich, Krzysztof; Sasaki, Chihiro; Suganuma, Hideo

    2015-11-01

    We investigate correlations of the Polyakov loop fluctuations with eigenmodes of the lattice Dirac operator. Their analytic relations are derived on the temporally odd-number size lattice with the normal nontwisted periodic boundary condition for the link variables. We find that the low-lying Dirac modes yield negligible contributions to the Polyakov loop fluctuations. This property is confirmed to be valid in confined and deconfined phases by numerical simulations in SU(3) quenched QCD. These results indicate that there is no direct, one-to-one correspondence between confinement and chiral symmetry breaking in QCD in the context of different properties of the Polyakov loop fluctuation ratios.

  17. Vertical profiles of microphysical particle properties derived from inversion with two-dimensional regularization of multiwavelength Raman lidar data: experiment.

    PubMed

    Müller, Detlef; Kolgotin, Alexei; Mattis, Ina; Petzold, Andreas; Stohl, Andreas

    2011-05-10

Inversion with two-dimensional (2-D) regularization is a new methodology that can be used for the retrieval of profiles of microphysical properties, e.g., effective radius and complex refractive index of atmospheric particles, from complete profiles (or sections of profiles) of optical particle properties. The optical profiles are acquired with multiwavelength Raman lidar. Previous simulations with synthetic data have shown advantages in terms of retrieval accuracy compared to our so-called classical one-dimensional (1-D) regularization, which is the method mostly used in the European Aerosol Research Lidar Network (EARLINET). The 1-D regularization suffers from limitations in retrieval accuracy, speed, and capability for error analysis. In this contribution, we test for the first time the performance of the new 2-D regularization algorithm on the basis of experimental data. We measured with lidar an aged biomass-burning plume over West/Central Europe. For comparison, we use particle in situ data taken in the smoke plume during research aircraft flights upwind of the lidar. We find good agreement for effective radius and for volume, surface-area, and number concentrations. The retrieved complex refractive index is on average lower than what we find from the in situ observations. Accordingly, the single-scattering albedo that we obtain from the inversion is higher than what we obtain from the aircraft data. In view of the difficult measurement situation, i.e., the large spatial and temporal distances between aircraft and lidar measurements, this test of our new inversion methodology is satisfactory. PMID:21556108

  18. Polyakov loop and gluon quasiparticles in Yang-Mills thermodynamics

    NASA Astrophysics Data System (ADS)

    Ruggieri, M.; Alba, P.; Castorina, P.; Plumari, S.; Ratti, C.; Greco, V.

    2012-09-01

We study the interpretation of lattice data about the thermodynamics of the deconfinement phase of SU(3) Yang-Mills theory, in terms of gluon quasiparticles propagating in a background of a Polyakov loop. A potential for the Polyakov loop, inspired by the strong coupling expansion of the QCD action, is introduced; the Polyakov loop is coupled to transverse gluon quasiparticles by means of a gaslike effective potential. This study is useful to identify the effective degrees of freedom propagating in the gluon medium above the critical temperature. A main general finding is that a dominant part of the phase transition dynamics is accounted for by the Polyakov loop dynamics; hence, the thermodynamics can be described without the need for diverging or exponentially increasing quasiparticle masses as T→Tc, at variance with standard quasiparticle models.

  19. Fast ultrasound beam prediction for linear and regular two-dimensional arrays.

    PubMed

    Hlawitschka, Mario; McGough, Robert J; Ferrara, Katherine W; Kruse, Dustin E

    2011-09-01

    Real-time beam predictions are highly desirable for the patient-specific computations required in ultrasound therapy guidance and treatment planning. To address the longstanding issue of the computational burden associated with calculating the acoustic field in large volumes, we use graphics processing unit (GPU) computing to accelerate the computation of monochromatic pressure fields for therapeutic ultrasound arrays. In our strategy, we start with acceleration of field computations for single rectangular pistons, and then we explore fast calculations for arrays of rectangular pistons. For single-piston calculations, we employ the fast near-field method (FNM) to accurately and efficiently estimate the complex near-field wave patterns for rectangular pistons in homogeneous media. The FNM is compared with the Rayleigh-Sommerfeld method (RSM) for the number of abscissas required in the respective numerical integrations to achieve 1%, 0.1%, and 0.01% accuracy in the field calculations. Next, algorithms are described for accelerated computation of beam patterns for two different ultrasound transducer arrays: regular 1-D linear arrays and regular 2-D linear arrays. For the array types considered, the algorithm is split into two parts: 1) the computation of the field from one piston, and 2) the computation of a piston-array beam pattern based on a pre-computed field from one piston. It is shown that the process of calculating an array beam pattern is equivalent to the convolution of the single-piston field with the complex weights associated with an array of pistons. Our results show that the algorithms for computing monochromatic fields from linear and regularly spaced arrays can benefit greatly from GPU computing hardware, exceeding the performance of an expensive CPU by more than 100 times using an inexpensive GPU board. For a single rectangular piston, the FNM method facilitates volumetric computations with 0.01% accuracy at rates better than 30 ns per field point
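The convolution identity at the heart of the array speedup can be checked directly. A hedged sketch with a toy single-piston field (random complex values stand in for an actual fast near-field method computation; grid and element layout are made up):

```python
import numpy as np

# For a regularly spaced array, the array beam pattern equals the convolution
# of one piston's field with the complex element weights, as the abstract
# observes. Here a random complex profile plays the role of the piston field.
rng = np.random.default_rng(0)
n = 256
single = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # toy piston field

# 8-element array: unit-magnitude complex weights every 8 grid points.
weights = np.zeros(n, dtype=complex)
weights[n // 2 - 32 : n // 2 + 32 : 8] = np.exp(1j * rng.uniform(0.0, 2 * np.pi, 8))

# Array field via a single convolution...
array_field = np.convolve(weights, single, mode="full")

# ...agrees with the explicit sum of shifted, weighted single-piston fields.
direct = np.zeros(2 * n - 1, dtype=complex)
for i in np.flatnonzero(weights):
    direct[i : i + n] += weights[i] * single
assert np.allclose(array_field, direct)
```

The pre-computed single-piston field is thus reused for every element, which is what makes the GPU formulation efficient.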

  20. Connecting Polyakov loops to the thermodynamics of SU(Nc) gauge theories using the gauge-string duality

    NASA Astrophysics Data System (ADS)

    Noronha, Jorge

    2010-02-01

We show that in four-dimensional gauge theories dual to five-dimensional Einstein gravity coupled to a single scalar field in the bulk, the derivative of the single heavy quark free energy in the deconfined phase is dF_Q(T)/dT ∼ -1/c_s^2(T), where c_s(T) is the speed of sound. This general result provides a direct link between the softest point in the equation of state of strongly-coupled plasmas and the deconfinement phase transition described by the expectation value of the Polyakov loop. We give an explicit example of a gravity dual with black hole solutions that can reproduce the lattice results for the expectation value of the Polyakov loop and the thermodynamics of SU(3) Yang-Mills theory in the (nonperturbative) temperature range between Tc and 3Tc.
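The stated relation can be illustrated numerically: integrating dF_Q/dT ∼ -1/c_s^2 shows the free energy dropping fastest where the equation of state is softest. A sketch with a made-up speed-of-sound profile and unit proportionality constant (all values arbitrary, for illustration only):

```python
import numpy as np

T = np.linspace(1.0, 3.0, 2001)                    # temperature in units of T_c
cs2 = 1.0 / 3.0 - 0.2 * np.exp(-3.0 * (T - 1.0))   # toy c_s^2(T), softest at T_c
a = 1.0                                            # unknown proportionality constant

dF_dT = -a / cs2                                   # the relation from the abstract

# Trapezoidal integration up from T_c (F set to 0 there).
F = np.concatenate(([0.0],
                    np.cumsum(0.5 * (dF_dT[1:] + dF_dT[:-1]) * np.diff(T))))

assert np.all(np.diff(F) < 0)        # free energy decreases with T
assert np.argmax(-dF_dT) == 0        # steepest descent at the softest point (T_c)
```

The softest point of the toy equation of state (minimum of c_s^2, here at T_c) is exactly where |dF_Q/dT| peaks.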

  1. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    PubMed Central

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-01-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
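The Lanczos-based reduction is easy to demonstrate on the star graph mentioned in the abstract. A minimal sketch (plain dense-matrix Lanczos, not the authors' implementation): starting from the uniform superposition, the Krylov basis terminates after two vectors, exposing the tiny invariant subspace that carries the walk dynamics.

```python
import numpy as np

def lanczos(H, v0, tol=1e-8, max_iter=50):
    """Tridiagonal matrix of H restricted to the Krylov space of v0."""
    vs = [v0 / np.linalg.norm(v0)]
    alphas, betas = [], []
    for _ in range(max_iter):
        w = H @ vs[-1]
        alpha = vs[-1] @ w
        w = w - alpha * vs[-1] - (betas[-1] * vs[-2] if betas else 0.0)
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        if beta < tol:
            break          # Krylov space exhausted: invariant subspace found
        betas.append(beta)
        vs.append(w / beta)
    k = len(alphas)
    return (np.diag(alphas)
            + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1))

N = 100                      # star graph: one hub, N - 1 leaves
H = np.zeros((N, N))
H[0, 1:] = 1.0
H[1:, 0] = 1.0               # adjacency matrix
v0 = np.ones(N)              # uniform superposition

Hred = lanczos(H, v0)
print(Hred.shape)            # (2, 2)
```

The 100-dimensional walk reduces to a 2×2 problem whose eigenvalues are the extreme adjacency eigenvalues ±sqrt(N-1) of the star graph, without any symmetry analysis by hand.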

  2. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    NASA Astrophysics Data System (ADS)

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-09-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network.

  3. Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions

    NASA Astrophysics Data System (ADS)

    Leiderman, Karin; Olson, Sarah D.

    2016-02-01

    The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.

  4. Uniform Regularity and Vanishing Dissipation Limit for the Full Compressible Navier-Stokes System in Three Dimensional Bounded Domain

    NASA Astrophysics Data System (ADS)

    Wang, Yong

    2016-09-01

In the present paper, we study the uniform regularity and vanishing dissipation limit for the full compressible Navier-Stokes system whose viscosity and heat conductivity are allowed to vanish at different orders. The problem is studied in a three dimensional bounded domain with Navier-slip type boundary conditions. It is shown that there exists a unique strong solution to the full compressible Navier-Stokes system with the boundary conditions in a finite time interval which is independent of the viscosity and heat conductivity. The solution is uniformly bounded in W^{1,∞} and in a conormal Sobolev space. Based on such uniform estimates, we prove the convergence of the solutions of the full compressible Navier-Stokes system to the corresponding solutions of the full compressible Euler system in L^∞(0,T; L^2), L^∞(0,T; H^1) and L^∞([0,T]×Ω) with a rate of convergence.

  5. Visualizations of coherent center domains in local Polyakov loops

    SciTech Connect

    Stokes, Finn M. Kamleh, Waseem; Leinweber, Derek B.

    2014-09-15

Quantum Chromodynamics exhibits a hadronic confined phase at low to moderate temperatures and, at a critical temperature T_C, undergoes a transition to a deconfined phase known as the quark–gluon plasma. The nature of this deconfinement phase transition is probed through visualizations of the Polyakov loop, a gauge independent order parameter. We produce visualizations that provide novel insights into the structure and evolution of center clusters. Using the HMC algorithm, percolation during the deconfinement transition is observed. Using 3D rendering of the phase and magnitude of the Polyakov loop, the fractal structure and correlations are examined. The evolution of the center clusters as the gauge fields thermalize from below the critical temperature to above it is also exposed. We observe deconfinement proceeding through a competition for the dominance of a particular center phase. We use stout-link smearing to remove small-scale noise in order to observe the large-scale evolution of the center clusters. A correlation between the magnitude of the Polyakov loop and the proximity of its phase to one of the center phases of SU(3) is evident in the visualizations. - Highlights: • We produce visualizations of center clusters in Polyakov loops. • The evolution of center clusters with HMC simulation time is examined. • Visualizations provide novel insights into the percolation of center clusters. • The magnitude and phase of the Polyakov loop are studied. • A correlation between the magnitude and center phase proximity is evident.
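The center-sector assignment underlying such visualizations can be sketched in a few lines (random complex numbers stand in for actual lattice Polyakov-loop data):

```python
import numpy as np

# The SU(3) center phases are 0 and ±2*pi/3; each local Polyakov-loop value
# is assigned to the nearest one, and the angular distance to it is the
# "proximity" quantity correlated with the loop magnitude in the paper.
center_phases = np.array([0.0, 2 * np.pi / 3, -2 * np.pi / 3])

def nearest_center(loop_values):
    """Nearest-center-sector index and angular distance for each loop value."""
    phases = np.angle(loop_values)                      # in (-pi, pi]
    d = np.abs(phases[:, None] - center_phases[None, :])
    d = np.minimum(d, 2 * np.pi - d)                    # shortest angular distance
    return np.argmin(d, axis=1), np.min(d, axis=1)

rng = np.random.default_rng(2)
loops = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)  # toy data
sector, dist = nearest_center(loops)
print(np.unique(sector))        # [0 1 2]: all three sectors populated
print(dist.max() <= np.pi / 3)  # True: never more than pi/3 from a center phase
```

Clusters in the visualizations are then connected regions of sites sharing the same sector index.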

  6. Regularization Method for Predicting an Ordinal Response Using Longitudinal High-dimensional Genomic Data

    PubMed Central

    Hou, Jiayi

    2015-01-01

An ordinal scale is commonly used to measure health status and disease related outcomes in hospital settings as well as in translational medical research. In addition, repeated measurements are common in clinical practice for tracking and monitoring the progression of complex diseases. Classical methodology based on statistical inference, in particular ordinal modeling, has contributed to the analysis of data in which the response categories are ordered and the number of covariates (p) remains smaller than the sample size (n). With the emergence of genomic technologies being increasingly applied for more accurate diagnosis and prognosis, high-dimensional data, where the number of covariates (p) is much larger than the number of samples (n), are generated. To meet the emerging needs, we introduce our proposed model, a two-stage algorithm: (1) extend the Generalized Monotone Incremental Forward Stagewise (GMIFS) method to the cumulative logit ordinal model; and (2) combine the GMIFS procedure with the classical mixed-effects model for classifying disease status in disease progression along with time. We demonstrate the efficiency and accuracy of the proposed models in classification using a time-course microarray dataset collected from the Inflammation and the Host Response to Injury study. PMID:25720102

  7. Simplicial pseudorandom lattice study of a three-dimensional Abelian gauge model, the regular lattice as an extremum of the action

    SciTech Connect

    Pertermann, D.; Ranft, J.

    1986-09-15

    We introduce a simplicial pseudorandom version of lattice gauge theory. In this formulation it is possible to interpolate continuously between a regular simplicial lattice and a pseudorandom lattice. Using this method we study a simple three-dimensional Abelian lattice gauge theory. Calculating average plaquette expectation values, we find an extremum of the action for our regular simplicial lattice. Such a behavior was found in analytical studies in one and two dimensions.

  8. Seiberg-Witten and 'Polyakov-like' Magnetic Bion Confinements are Continuously Connected

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2012-06-01

We study four-dimensional N = 2 supersymmetric pure-gauge (Seiberg-Witten) theory and its N = 1 mass perturbation by using compactification on S^1 × R^3. It is well known that on R^4 (or at large S^1 size L) the perturbed theory realizes confinement through monopole or dyon condensation. At small S^1, we demonstrate that confinement is induced by a generalization of Polyakov's three-dimensional instanton mechanism to a locally four-dimensional theory - the magnetic bion mechanism - which also applies to a large class of nonsupersymmetric theories. Using a large- vs. small-L Poisson duality, we show that the two mechanisms of confinement, previously thought to be distinct, are in fact continuously connected.

  9. RENORMALIZATION OF POLYAKOV LOOPS IN FUNDAMENTAL AND HIGHER REPRESENTATIONS

    SciTech Connect

Kaczmarek, O.; Gupta, S.; Huebner, K.

    2007-07-30

We compare two renormalization procedures, one based on the short distance behavior of heavy quark-antiquark free energies and the other using bare Polyakov loops at different temporal extents of the lattice, and find that both prescriptions are equivalent, resulting in renormalization constants that depend on the bare coupling. Furthermore these renormalization constants show Casimir scaling for higher representations of the Polyakov loops. The analysis of Polyakov loops in different representations of the color SU(3) group indicates that a simple perturbative inspired relation in terms of the quadratic Casimir operator is realized to a good approximation at temperatures T ≳ T_c, for renormalized as well as bare loops. In contrast to a vanishing Polyakov loop in representations with non-zero triality in the confined phase, the adjoint loops are small but non-zero even for temperatures below the critical one. The adjoint quark-antiquark pairs exhibit screening. This behavior can be related to the binding energy of glue-lump states.
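The Casimir-scaling relation can be made concrete for SU(3). A small sketch assuming the standard group-theory formula C2(p, q) = (p² + q² + pq)/3 + p + q for the quadratic Casimir of the irrep with Dynkin labels (p, q):

```python
def c2_su3(p, q):
    """Quadratic Casimir of the SU(3) irrep with Dynkin labels (p, q)."""
    return (p * p + q * q + p * q) / 3 + p + q

c2_fund = c2_su3(1, 0)  # fundamental: 4/3
for name, (p, q) in {"fundamental": (1, 0),
                     "sextet": (2, 0),
                     "adjoint": (1, 1)}.items():
    # d_R = C2(R)/C2(F) is the Casimir-scaling exponent: L_R ~ (L_F)^{d_R}.
    print(name, c2_su3(p, q), c2_su3(p, q) / c2_fund)

# Example: a fundamental loop value of 0.6 predicts 0.6 ** 2.25 ≈ 0.32
# for the adjoint loop (C2 ratio 3 / (4/3) = 2.25).
```

This is the "simple perturbative inspired relation" the abstract tests against lattice data for T ≳ T_c.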

  10. Polyakov loop, hadron resonance gas model and thermodynamics of QCD

    SciTech Connect

    Megías, E.; Arriola, E. Ruiz; Salcedo, L. L.

    2014-11-11

    We summarize recent results on the hadron resonance gas description of QCD. In particular, we apply this approach to describe the equation of state and the vacuum expectation value of the Polyakov loop in several representations. Ambiguities related to exactly which states should be included are discussed.

  11. The Polyakov relation for the sphere and higher genus surfaces

    NASA Astrophysics Data System (ADS)

    Menotti, Pietro

    2016-05-01

    The Polyakov relation, which in the sphere topology gives the changes of the Liouville action under the variation of the position of the sources, is also related in the case of higher genus to the dependence of the action on the moduli of the surface. We write and prove such a relation for genus 1 and for all hyperelliptic surfaces.

  12. Polyakov loop of antisymmetric representations as a quantum impurity model

    SciTech Connect

    Mueck, Wolfgang

    2011-03-15

The Polyakov loop of an operator in the antisymmetric representation in N=4 supersymmetric Yang-Mills theory on spatial R^3 is calculated, to leading order in 1/N and at large 't Hooft coupling, by solving the saddle point equations of the corresponding quantum impurity model. Agreement is found with previous results from the supergravity dual, which is given by a D5-brane in an asymptotically AdS_5 × S^5 black brane background. It is shown that the azimuth angle, at which the dual D5-brane wraps the S^5, is related to the spectral asymmetry angle in the spectral density associated with the Green's function of the impurity fermions. Much of the calculation also applies to the Polyakov loop on spatial S^3 or H^3.

  13. Polyakov loop at next-to-next-to-leading order

    NASA Astrophysics Data System (ADS)

    Berwein, Matthias; Brambilla, Nora; Petreczky, Péter; Vairo, Antonio

    2016-02-01

We calculate the next-to-next-to-leading correction to the expectation value of the Polyakov loop or equivalently to the free energy of a static charge. This correction is of order g^5. We show that up to this order the free energy of the static charge is proportional to the quadratic Casimir operator of the corresponding representation. We also compare our perturbative result with the most recent lattice results in SU(3) gauge theory.

  14. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T; Cooper, Benjamin J; Kuncic, Zdenka; Keall, Paul J

    2015-01-01

Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan, and was compared to FDK, ASD-POCS, and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). 
Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS, and

  15. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    NASA Astrophysics Data System (ADS)

    Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T.; Cooper, Benjamin J.; Kuncic, Zdenka; Keall, Paul J.

    2015-01-01

Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). 
Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and did
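    The iteration structure described in this abstract (a data-fidelity step, a per-iteration anatomy segmentation, and a smoothing step damped inside segmented structures) can be caricatured in a few lines. This is only a schematic sketch under stated assumptions: the function name, the toy system matrix, the quadratic "TV-like" penalty, and all constants are illustrative, not the published AAIR implementation.

    ```python
    import numpy as np

    def aair_like_reconstruction(A, b, shape, n_iter=50, tv_step=0.1,
                                 anatomy_weight=0.2, seg_threshold=0.5):
        """Schematic ASD-POCS-style loop with a per-iteration segmentation
        step that damps smoothing inside segmented anatomy. All names and
        constants are illustrative, not the authors' implementation."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # 1) data-fidelity step (gradient descent on ||Ax - b||^2)
            x = x - 0.01 * A.T @ (A @ x - b)
            x = np.clip(x, 0.0, None)                  # non-negativity
            img = x.reshape(shape)
            # 2) crude anatomy "segmentation" by intensity thresholding
            mask = img > seg_threshold
            # 3) TV-like smoothing step, damped where anatomy was segmented
            grad = np.zeros_like(img)
            dy = img[:-1, :] - img[1:, :]
            dx_ = img[:, :-1] - img[:, 1:]
            grad[:-1, :] += dy
            grad[1:, :] -= dy
            grad[:, :-1] += dx_
            grad[:, 1:] -= dx_
            weight = np.where(mask, anatomy_weight, 1.0)
            x = (img - tv_step * weight * grad).ravel()
        return x.reshape(shape)
    ```

    The key design point mirrored here is that the segmentation mask only rescales the regularization strength, so structures of interest are smoothed less while the background still benefits from full noise suppression.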

  16. Phase diagram and nucleation in the Polyakov-loop-extended quark-meson truncation of QCD with the unquenched Polyakov-loop potential

    NASA Astrophysics Data System (ADS)

    Stiele, Rainer; Schaffner-Bielich, Jürgen

    2016-05-01

The unquenching of the Polyakov-loop potential has been shown to be an important improvement for the description of the phase structure and thermodynamics of strongly interacting matter at zero quark chemical potential with Polyakov-loop-extended chiral models. This work constitutes the first application of the quark backreaction on the Polyakov-loop potential at nonzero density. The observation is that it links the chiral and deconfinement phase transitions also at small temperatures and large quark chemical potentials. The build-up of the surface tension in the Polyakov-loop-extended quark-meson model is explored by investigating the two- and 2+1-flavor quark-meson model and analyzing the impact of the Polyakov-loop extension. In general, the order of magnitude of the surface tension is given by the chiral phase transition. The coupling of the chiral and deconfinement transitions with the unquenched Polyakov-loop potential leads to the fact that the Polyakov loop contributes at all temperatures.

  17. Scaling behavior of regularized bosonic strings

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Makeenko, Y.

    2016-03-01

    We implement a proper-time UV regularization of the Nambu-Goto string, introducing an independent metric tensor and the corresponding Lagrange multiplier, and treating them in the mean-field approximation justified for long strings and/or when the dimension of space-time is large. We compute the regularized determinant of the 2D Laplacian for the closed string winding around a compact dimension, obtaining in this way the effective action, whose minimization determines the energy of the string ground state in the mean-field approximation. We discuss the existence of two scaling limits when the cutoff is taken to infinity. One scaling limit reproduces the results obtained by the hypercubic regularization of the Nambu-Goto string as well as by the use of the dynamical triangulation regularization of the Polyakov string. The other scaling limit reproduces the results obtained by canonical quantization of the Nambu-Goto string.

  18. From chiral quark dynamics with Polyakov loop to the hadron resonance gas model

    SciTech Connect

    Arriola, E. R.; Salcedo, L. L.; Megias, E.

    2013-03-25

    Chiral quark models with Polyakov loop at finite temperature have been often used to describe the phase transition. We show how the transition to a hadron resonance gas is realized based on the quantum and local nature of the Polyakov loop.

  19. QCD at zero baryon density and the Polyakov loop paradox

    SciTech Connect

    Kratochvila, Slavo; Forcrand, Philippe de

    2006-06-01

We compare the grand-canonical partition function at fixed chemical potential {mu} with the canonical partition function at fixed baryon number B, formally and by numerical simulations at {mu}=0 and B=0 with four flavors of staggered quarks. We verify that the free energy densities are equal in the thermodynamic limit, and show that they can be well described by the hadron resonance gas at T<T{sub c}. Small differences between the two ensembles, for thermodynamic observables characterizing the deconfinement phase transition, vanish with increasing lattice size. These differences are solely caused by contributions of nonzero baryon density sectors, which are exponentially suppressed with increasing volume. The Polyakov loop shows a different behavior: for all temperatures and volumes, its expectation value is exactly zero in the canonical formulation, whereas it is always nonzero in the commonly used grand-canonical formulation. We clarify this paradoxical difference, and show that the nonvanishing Polyakov loop expectation value is due to contributions of nonzero triality states, which are not physical, because they give zero contribution to the partition function.
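    The formal link between the two ensembles compared in this abstract is the standard fugacity expansion, and the vanishing of the canonical Polyakov loop follows from center symmetry: in the zero-triality ensemble the three Z(3) center sectors contribute with canceling phases. Sketched schematically (a textbook statement, added here for orientation):

    ```latex
    Z_{GC}(\mu,T)=\sum_{B=-\infty}^{\infty} e^{\mu B/T}\, Z_{C}(B,T),
    \qquad
    \langle L\rangle_{B=0}\;\propto\;\sum_{k=0}^{2} e^{2\pi i k/3}=0 .
    ```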

  20. Fuzzy bags, Polyakov loop and gauge/string duality

    NASA Astrophysics Data System (ADS)

    Zuo, Fen

    2014-11-01

Confinement in SU(N) gauge theory is due to the linear potential between colored objects. At short distances, the linear contribution could be considered as the quadratic correction to the leading Coulomb term. Recent lattice data show that such quadratic corrections also appear in the deconfined phase, in both the thermal quantities and the Polyakov loop. These contributions are studied systematically employing the gauge/string duality. "Confinement" in N = 4 SU(N) Super Yang-Mills (SYM) theory could be achieved kinematically when the theory is defined on a compact space manifold. In the large-N limit, deconfinement of N = 4 SYM on {{Bbb S}^3} at strong coupling is dual to the Hawking-Page phase transition in the global Anti-de Sitter spacetime. Meanwhile, all the thermal quantities and the Polyakov loop acquire significant quadratic contributions. Similar results can also be obtained at weak coupling. However, when confinement is induced dynamically through the local dilaton field in the gravity-dilaton system, these contributions cannot be generated consistently. This is in accordance with the fact that there is no dimension-2 gauge-invariant operator in the boundary gauge theory. Based on these results, we suspect that quadratic corrections, and also confinement, should be due to global or non-local effects in the bulk spacetime.

  1. Numerical corrections to the strong coupling effective Polyakov-line action for finite T Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Bergner, G.; Langelage, J.; Philipsen, O.

    2015-11-01

We consider a three-dimensional effective theory of Polyakov lines derived previously from lattice Yang-Mills theory and QCD by means of a resummed strong coupling expansion. The effective theory is useful for investigations of the phase structure, with a sign problem mild enough to allow simulations also at finite density. In this work we present a numerical method to determine improved values for the effective couplings directly from correlators of 4d Yang-Mills theory. For values of the gauge coupling up to the vicinity of the phase transition, the dominant short-range effective couplings are well described by their corresponding strong coupling series. We provide numerical results also for the longer range interactions, Polyakov lines in higher representations as well as four-point interactions, and discuss the growing significance of non-local contributions as the lattice gets finer. Within this approach the critical Yang-Mills coupling βc is reproduced to better than one percent from a one-coupling effective theory on Nτ = 4 lattices, while up to five couplings are needed on Nτ = 8 for the same accuracy.

  2. Entropy-based viscous regularization for the multi-dimensional Euler equations in low-Mach and transonic flows

    SciTech Connect

    Marc O Delchini; Jean E. Ragusa; Ray A. Berry

    2015-07-01

    We present a new version of the entropy viscosity method, a viscous regularization technique for hyperbolic conservation laws, that is well-suited for low-Mach flows. By means of a low-Mach asymptotic study, new expressions for the entropy viscosity coefficients are derived. These definitions are valid for a wide range of Mach numbers, from subsonic flows (with very low Mach numbers) to supersonic flows, and no longer depend on an analytical expression for the entropy function. In addition, the entropy viscosity method is extended to Euler equations with variable area for nozzle flow problems. The effectiveness of the method is demonstrated using various 1-D and 2-D benchmark tests: flow in a converging–diverging nozzle; Leblanc shock tube; slow moving shock; strong shock for liquid phase; low-Mach flows around a cylinder and over a circular hump; and supersonic flow in a compression corner. Convergence studies are performed for smooth solutions and solutions with shocks present.
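    The entropy viscosity construction summarized above pairs a residual-based high-order coefficient with a first-order upper bound, taking the smaller of the two at each point. A minimal 1-D sketch of the generic method follows; the tuning constants `c_e` and `c_max`, the entropy expression, and the discretization are illustrative assumptions, and this is not the low-Mach variant derived in the paper.

    ```python
    import numpy as np

    def entropy_viscosity_1d(rho, u, p, dx, dt, s_prev, gamma=1.4,
                             c_e=1.0, c_max=0.5):
        """Schematic entropy-viscosity coefficients on a 1-D grid.
        The constants c_e and c_max are illustrative assumptions;
        this sketches the generic method, not the paper's low-Mach variant."""
        s = np.log(p / rho**gamma)          # specific entropy up to constants
        dsdt = (s - s_prev) / dt            # discrete entropy residual: time part
        dsdx = np.gradient(s, dx)           # ... and space part
        residual = np.abs(dsdt + u * dsdx)  # large near shocks, ~0 in smooth flow
        norm = np.max(np.abs(s - s.mean())) + 1e-12
        mu_e = c_e * dx**2 * residual / norm           # high-order viscosity
        c_sound = np.sqrt(gamma * p / rho)
        mu_max = c_max * dx * (np.abs(u) + c_sound)    # first-order upper bound
        return np.minimum(mu_e, mu_max), s
    ```

    In smooth regions the residual vanishes and the viscosity is negligible, while near shocks the first-order bound takes over; the paper's contribution is to redefine these coefficients so they remain well behaved as the Mach number goes to zero.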

  3. Dual quark condensate in the Polyakov-loop extended Nambu-Jona-Lasinio model

    SciTech Connect

    Kashiwa, Kouji; Yahiro, Masanobu; Kouno, Hiroaki

    2009-12-01

The dual quark condensate {sigma}{sup (n)}, proposed recently as a new order parameter of the spontaneous breaking of the Z{sub 3} symmetry, is evaluated with the Polyakov-loop extended Nambu-Jona-Lasinio (PNJL) model, where n are winding numbers. The model reproduces well the lattice QCD data on {sigma}{sup (1)} measured very recently. The dual quark condensate {sigma}{sup (n)} at higher temperatures is sensitive to the strength of the vector-type four-quark interaction in the model and is hence a good quantity for determining that strength.

  4. Regularity of the Rotation Number for the One-Dimensional Time-Continuous Schrödinger Equation

    NASA Astrophysics Data System (ADS)

    Amor, Sana Hadj

    2012-12-01

    Starting from results already obtained for quasi-periodic co-cycles in SL(2, R), we show that the rotation number of the one-dimensional time-continuous Schrödinger equation with Diophantine frequencies and a small analytic potential has the behavior of a 1/2-Hölder function. We also give a sub-exponential estimate of the length of each spectral gap, depending on its label as given by the gap-labeling theorem.

  5. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    NASA Astrophysics Data System (ADS)

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry.

  6. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry. PMID:24603964

  7. Regular and Chaotic Ray and Wave Mappings for Two and Three-Dimensional Systems with Applications to a Periodically Perturbed Waveguide.

    NASA Astrophysics Data System (ADS)

    Ratowsky, Ricky Paul

    We investigate quantum or wave dynamics for a system which is stochastic in the classical or eikonal (ray) limit. This system is a mapping which couples the standard mapping to an additional degree of freedom. We observe numerically, in most but not all cases, the asymptotic (in time) limitation of diffusion in the classically strongly chaotic regime, and the inhibition of Arnold diffusion when there exist KAM surfaces classically. We present explicitly the two-dimensional asymptotic localized distributions for each case, when they exist. The scaling of the characteristic widths of the localized distributions with coupling strength has been determined. A simple model accounts for the observed behavior in the limit of weak coupling, and we derive a scaling law for the diffusive time scale in the system. We explore some implications of the wave mapping for a class of optical or acoustical systems: a parallel plate waveguide or duct with a periodically perturbed boundary (a grating), and a lens waveguide with nonlinear focusing elements. We compute the ray trajectories of each system, using a Poincare surface of section to study the dynamics. Each system leads to a near-integrable ray Hamiltonian: the phase space splits into regions showing regular or chaotic behavior. The solutions to the scalar Helmholtz equation are found via a secular equation determining the eigenfrequencies. A wave mapping is derived for the system in the paraxial regime. We find that localization should occur, limiting the beam spread in both wavevector and configuration space. In addition, we consider the effect of retaining higher order terms in the paraxial expansion. Although we focus largely on the two dimensional case, we make some remarks concerning the four dimensional mapping for this system.

  8. Nontopological soliton in the Polyakov quark-meson model

    NASA Astrophysics Data System (ADS)

    Jin, Jinshuang; Mao, Hong

    2016-01-01

    Within a mean-field approximation, we study a nontopological soliton solution of the Polyakov quark-meson model in the presence of a fermionic vacuum term with two flavors at finite temperature and density. The profile of the effective potential exhibits a stable soliton solution below a critical temperature T ≤ Tχc for both the crossover and the first-order phase transitions, and these solutions are calculated here with appropriate boundary conditions. However, it is found that only for T ≤ Tdc is the energy of the soliton MN less than the energy of the three free constituent quarks 3 Mq. As T > Tdc, there is an instant delocalization phase transition from hadron matter to quark matter. The phase diagram together with the location of a critical end point has been obtained in the T and μ plane. We notice that the two critical temperatures always satisfy Tdc ≤ Tχc. Finally, we present and compare the result of thermodynamic pressure at zero chemical potential with lattice data.

  9. Polyakov loop and the hadron resonance gas model.

    PubMed

    Megías, E; Arriola, E Ruiz; Salcedo, L L

    2012-10-12

    The Polyakov loop has been used repeatedly as an order parameter in the deconfinement phase transition in QCD. We argue that, in the confined phase, its expectation value can be represented in terms of hadronic states, similarly to the hadron resonance gas model for the pressure. Specifically, L(T) ≈ (1/2)∑(α)g(α)e(-Δ(α)/T), where g(α) are the degeneracies and Δ(α) are the masses of hadrons with exactly one heavy quark (the mass of the heavy quark itself being subtracted). We show that this approximate sum rule gives a fair description of available lattice data with N(f)=2+1 for temperatures in the range 150 MeV
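    The sum rule quoted in this abstract is directly computable once a single-heavy-quark spectrum is supplied. A minimal sketch follows; the state list `toy_states` is an invented placeholder (degeneracies and mass gaps are not physical hadron data), and a real application would sum over the full spectrum.

    ```python
    import math

    def polyakov_loop_hrg(T, states):
        """Hadron-resonance-gas estimate L(T) ~ (1/2) * sum_a g_a * exp(-Delta_a/T).
        T and the gaps Delta_a are in MeV; g_a are degeneracies."""
        return 0.5 * sum(g * math.exp(-delta / T) for g, delta in states)

    # Toy stand-in for the single-heavy-quark hadron spectrum (illustrative only).
    toy_states = [(2, 600.0), (4, 750.0), (2, 900.0)]

    for T in (150.0, 170.0, 190.0):
        print(f"T = {T:.0f} MeV  ->  L(T) ~ {polyakov_loop_hrg(T, toy_states):.4f}")
    ```

    The estimate grows with T as heavier states become thermally accessible, qualitatively matching the rise of the Polyakov loop toward the deconfinement transition.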

  10. The effect of the Polyakov loop on the chiral phase transition

    NASA Astrophysics Data System (ADS)

    Markó, G.; Szép, Zs.

    2011-04-01

    The Polyakov loop is included in the SU(2)L × SU(2)R chiral quark-meson model by considering the propagation of the constituent quarks, coupled to the (σ, π) meson multiplet, on the homogeneous background of a temporal gauge field, diagonal in color space. The model is solved at finite temperature and quark baryon chemical potential, both in the chiral limit and for the physical value of the pion mass, by using an expansion in the number of flavors Nf. Keeping the fermion propagator at its tree level, a resummation on the pion propagator is constructed which resums infinitely many orders in 1/Nf, where O(1/Nf) represents the order at which the fermions start to contribute in the pion propagator. The influence of the Polyakov loop on the tricritical or the critical point in the μq-T phase diagram is studied for various forms of the Polyakov loop potential.

  11. The Polyakov loop correlator at NNLO and singlet and octet correlators

    SciTech Connect

    Ghiglieri, Jacopo

    2011-05-23

    We present the complete next-to-next-to-leading-order calculation of the correlation function of two Polyakov loops for temperatures smaller than the inverse distance between the loops and larger than the Coulomb potential. We discuss the relationship of this correlator with the singlet and octet potentials which we obtain in an Effective Field Theory framework based on finite-temperature potential Non-Relativistic QCD, showing that the Polyakov loop correlator can be re-expressed, at the leading order in a multipole expansion, as a sum of singlet and octet contributions. We also revisit the calculation of the expectation value of the Polyakov loop at next-to-next-to-leading order.

  12. Propagator, sewing rules, and vacuum amplitude for the Polyakov point particles with ghosts

    SciTech Connect

    Giannakis, I.; Ordonez, C.R.; Rubin, M.A.; Zucchini, R.

    1989-01-01

    The authors apply techniques developed for strings to the case of the spinless point particle. The Polyakov path integral with ghosts is used to obtain the propagator and one-loop vacuum amplitude. The propagator is shown to correspond to the Green's function for the BRST field theory in Siegel gauge. The reparametrization invariance of the Polyakov path integral is shown to lead automatically to the correct trace log result for the one-loop diagram, despite the fact that naive sewing of the ends of a propagator would give an incorrect answer. This type of failure of naive sewing is identical to that found in the string case. The present treatment provides, in the simplified context of the point particle, a pedagogical introduction to Polyakov path integral methods with and without ghosts.

  13. Polyakov-loop suppression of colored states in a quark-meson-diquark plasma

    NASA Astrophysics Data System (ADS)

    Blaschke, D.; Dubinin, A.; Buballa, M.

    2015-06-01

    A quark-meson-diquark plasma is considered within the Polyakov-loop extended Nambu-Jona-Lasinio model for dynamical chiral symmetry breaking and restoration in quark matter. Based on a generalized Beth-Uhlenbeck approach to mesons and diquarks we present the thermodynamics of this system including the Mott dissociation of mesons and diquarks at finite temperature. A striking result is the suppression of the diquark abundance below the chiral restoration temperature by the coupling to the Polyakov loop, because of their color degree of freedom. This is understood in close analogy to the suppression of quark distributions by the same mechanism. Mesons as color singlets are unaffected by the Polyakov-loop suppression. At temperatures above the chiral restoration mesons and diquarks are both suppressed due to the Mott effect, whereby the positive resonance contribution to the pressure is largely compensated by the negative scattering contribution in accordance with the Levinson theorem.

  14. Constituent Quarks and Gluons, Polyakov loop and the Hadron Resonance Gas Model ***

    NASA Astrophysics Data System (ADS)

    Megías, E.; Ruiz Arriola, E.; Salcedo, L. L.

    2014-03-01

    Based on first principle QCD arguments, it has been argued in [1] that the vacuum expectation value of the Polyakov loop can be represented in the hadron resonance gas model. We study this within the Polyakov-constituent quark model by implementing the quantum and local nature of the Polyakov loop [2, 3]. The existence of exotic states in the spectrum is discussed. Presented by E. Megías at the International Nuclear Physics Conference INPC 2013, 2-7 June 2013, Firenze, Italy. Supported by Plan Nacional de Altas Energías (FPA2011-25948), DGI (FIS2011-24149), Junta de Andalucía grant FQM-225, Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), Spanish MINECO's Centro de Excelencia Severo Ochoa Program grant SEV-2012-0234, and the Juan de la Cierva Program.

  15. Exploring the role of model parameters and regularization procedures in the thermodynamics of the PNJL model

    SciTech Connect

    Ruivo, M. C.; Costa, P.; Sousa, C. A. de; Hansen, H.

    2010-08-05

    The equation of state and the critical behavior around the critical end point are studied in the framework of the Polyakov-Nambu-Jona-Lasinio model. We prove that a convenient choice of the model parameters is crucial to get the correct description of isentropic trajectories. The physical relevance of the effects of the regularization procedure is insured by the agreement with general thermodynamic requirements. The results are compared with simple thermodynamic expectations and lattice data.

  16. Polyakov loop extended Nambu Jona-Lasinio model with imaginary chemical potential

    NASA Astrophysics Data System (ADS)

    Sakai, Yuji; Kashiwa, Kouji; Kouno, Hiroaki; Yahiro, Masanobu

    2008-03-01

    The Polyakov loop extended Nambu Jona-Lasinio (PNJL) model with imaginary chemical potential is studied. The model possesses the extended Z3 symmetry that QCD does. Quantities invariant under the extended Z3 symmetry, such as the partition function, the chiral condensate, and the modified Polyakov loop, have Roberge-Weiss periodicity. The phase diagram of confinement/deconfinement transition derived with the PNJL model is consistent with the Roberge-Weiss prediction on it and the results of lattice QCD. The phase diagram of chiral transition is also presented by the PNJL model.

  17. Polyakov loop extended Nambu-Jona-Lasinio model with imaginary chemical potential

    SciTech Connect

    Sakai, Yuji; Kashiwa, Kouji; Yahiro, Masanobu; Kouno, Hiroaki

    2008-03-01

    The Polyakov loop extended Nambu-Jona-Lasinio (PNJL) model with imaginary chemical potential is studied. The model possesses the extended Z{sub 3} symmetry that QCD does. Quantities invariant under the extended Z{sub 3} symmetry, such as the partition function, the chiral condensate, and the modified Polyakov loop, have Roberge-Weiss periodicity. The phase diagram of confinement/deconfinement transition derived with the PNJL model is consistent with the Roberge-Weiss prediction on it and the results of lattice QCD. The phase diagram of chiral transition is also presented by the PNJL model.
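    The Roberge-Weiss periodicity invoked in these two records is the statement that, at imaginary quark chemical potential μ = iθT, the QCD partition function is periodic in θ with period 2π/Nc (period 2π/3 for three colors), a consequence of the center symmetry Z(Nc):

    ```latex
    Z\!\left(\theta+\tfrac{2\pi}{N_c}\right)=Z(\theta),
    \qquad \mu = i\,\theta\,T,\quad N_c = 3 .
    ```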

  18. Extensions and further applications of the nonlocal Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Hell, T.; Weise, W.; Kashiwa, K.

    2011-06-01

    The nonlocal Polyakov-loop-extended Nambu-Jona-Lasinio model is further improved by including momentum-dependent wave-function renormalization in the quark quasiparticle propagator. Both two- and three-flavor versions of this improved Polyakov-loop-extended Nambu-Jona-Lasinio model are discussed, the latter with inclusion of the (nonlocal) 't Hooft-Kobayashi-Maskawa determinant interaction in order to account for the axial U(1) anomaly. Thermodynamics and phases are investigated and compared with recent lattice-QCD results.

  19. Phase transition of strongly interacting matter with a chemical potential dependent Polyakov loop potential

    NASA Astrophysics Data System (ADS)

    Shao, Guo-yun; Tang, Zhan-duo; Di Toro, Massimo; Colonna, Maria; Gao, Xue-yan; Gao, Ning

    2016-07-01

    We construct a hadron-quark two-phase model based on the Walecka quantum hadrodynamics and the improved Polyakov-Nambu-Jona-Lasinio (PNJL) model with an explicit chemical potential dependence of the Polyakov loop potential (μPNJL model). With respect to the original PNJL model, the confined-deconfined phase transition is largely affected at low temperature and large chemical potential. Using the two-phase model, we investigate the equilibrium transition between hadronic and quark matter at finite chemical potentials and temperatures. The numerical results show that the transition boundaries from nuclear to quark matter move towards smaller chemical potential (lower density) when the μ-dependent Polyakov loop potential is taken. In particular, for charge asymmetric matter, we compute the local asymmetry of u, d quarks in the hadron-quark coexisting phase, and analyze the isospin-relevant observables possibly measurable in heavy-ion collision (HIC) experiments. In general, new HIC data on the location and properties of the mixed phase would bring relevant information on the expected chemical potential dependence of the Polyakov loop contribution.

  20. Rotations of the Regular Polyhedra

    ERIC Educational Resources Information Center

    Jones, MaryClara; Soto-Johnson, Hortensia

    2006-01-01

    The study of the rotational symmetries of the regular polyhedra is important in the classroom for many reasons. Besides giving the students an opportunity to visualize in three dimensions, it is also an opportunity to relate two-dimensional and three-dimensional concepts. For example, rotations in R[superscript 2] require a point and an angle of…

  1. Renormalized Polyakov loop in the deconfined phase of SU(N) gauge theory and gauge-string duality.

    PubMed

    Andreev, Oleg

    2009-05-29

    We use gauge-string duality to analytically evaluate the renormalized Polyakov loop in pure Yang-Mills theories. For SU(3), the result is in quite good agreement with lattice simulations for a broad temperature range. PMID:19519096

  2. Geometrical interpretation of the Knizhnik-Polyakov-Zamolodchikov exponents

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Anagnostopoulos, K. N.; Magnea, U.; Thorleifsson, G.

    1996-02-01

    We provide evidence that the KPZ exponents in two-dimensional quantum gravity can be interpreted as scaling exponents of correlation functions which are functions of the invariant geodesic distance between the fields.

  3. Hydrodynamics of the Polyakov line in SU(Nc) Yang-Mills

    NASA Astrophysics Data System (ADS)

    Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail

    2016-02-01

    We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite Nc for Yang-Mills theory in even and odd space-time dimensions. The hydrostatic solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the deconfined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of Nc, and are consistent with the string model results at Nc = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(Nc) bubble using a piecewise sound wave is suggested.

  4. Thermodynamics of a three-flavor nonlocal Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Hell, T.; Roessner, S.; Cristoforetti, M.; Weise, W.

    2010-04-01

    The present work generalizes a nonlocal version of the Polyakov-loop-extended Nambu and Jona-Lasinio (PNJL) model to the case of three active quark flavors, with inclusion of the axial U(1) anomaly. Gluon dynamics is incorporated through a gluonic background field, expressed in terms of the Polyakov loop. The thermodynamics of the nonlocal PNJL model accounts for both chiral and deconfinement transitions. Our results obtained in mean-field approximation are compared to lattice QCD results for N{sub f}=2+1 quark flavors. Additional pionic and kaonic contributions to the pressure are calculated in random phase approximation. Finally, this nonlocal three-flavor PNJL model is applied to the finite density region of the QCD phase diagram. It is confirmed that the existence and location of a critical point in this phase diagram depend sensitively on the strength of the axial U(1) breaking interaction.

  5. Crystalline ground states in Polyakov-loop extended Nambu-Jona-Lasinio models

    NASA Astrophysics Data System (ADS)

    Braun, Jens; Karbstein, Felix; Rechenberger, Stefan; Roscher, Dietrich

    2016-01-01

    Nambu-Jona-Lasinio-type models have been used extensively to study the dynamics of the theory of the strong interaction at finite temperature and quark chemical potential on a phenomenological level. In addition to these studies, which are often performed under the assumption that the ground state of the theory is homogeneous, searches for the existence of crystalline phases associated with inhomogeneous ground states have attracted a lot of interest in recent years. In this work, we study the Polyakov-loop extended Nambu-Jona-Lasinio model using two prominent parametrizations and find that the existence of a crystalline phase is stable against a variation of the parametrization of the underlying Polyakov loop potential.

  6. Average phase factor in the Polyakov-loop extended Nambu-Jona-Lasinio model

    SciTech Connect

    Sakai, Yuji; Sasaki, Takahiro; Yahiro, Masanobu; Kouno, Hiroaki

    2010-11-01

The average phase factor of the QCD determinant is evaluated at finite quark chemical potential ({mu}{sub q}) with the two-flavor version of the Polyakov-loop extended Nambu-Jona-Lasinio model with the scalar-type eight-quark interaction. For {mu}{sub q} larger than half the pion mass m{sub {pi}} at vacuum, the average phase factor is finite only when the Polyakov loop is larger than {approx}0.5, indicating that lattice QCD is feasible only in the deconfinement phase. A critical end point lies in the region where the average phase factor vanishes. The scalar-type eight-quark interaction shortens the relative distance between the critical end point and the boundary of this region. For {mu}{sub q} smaller than half the pion mass, the Polyakov-loop extended Nambu-Jona-Lasinio model with dynamical mesonic fluctuations can reproduce lattice QCD data below the critical temperature.

  7. Dilepton and photon production in the presence of a nontrivial Polyakov loop

    NASA Astrophysics Data System (ADS)

    Hidaka, Yoshimasa; Lin, Shu; Pisarski, Robert D.; Satow, Daisuke

    2015-10-01

We calculate the production of dileptons and photons in the presence of a nontrivial Polyakov loop in QCD. This is applicable to the semi-quark-gluon plasma (semi-QGP), at temperatures above but near the critical temperature for deconfinement. The Polyakov loop is small in the semi-QGP, and near unity in the perturbative QGP. Working to leading order in the coupling constant of QCD, we find that there is a mild enhancement, ~20%, for dilepton production in the semi-QGP over that in the perturbative QGP. In contrast, we find that photon production is strongly suppressed in the semi-QGP, by about an order of magnitude, relative to the perturbative QGP. In the perturbative QGP photon production contains contributions from 2 → 2 scattering and collinear emission with the Landau-Pomeranchuk-Migdal (LPM) effect. In the semi-QGP we show that the two contributions are modified differently. The rate for 2 → 2 scattering is suppressed by a factor which depends upon the Polyakov loop. In contrast, in an SU(N) gauge theory the collinear rate is suppressed by 1/N, so that the LPM effect vanishes at N = ∞. To leading order in the semi-QGP at large N, we compute the rate from 2 → 2 scattering to leading logarithmic order and the collinear rate to leading order.

  8. Correlation between conserved charges in Polyakov-Nambu-Jona-Lasinio model with multiquark interactions

    SciTech Connect

    Bhattacharyya, Abhijit; Deb, Paramita; Lahiri, Anirban; Ray, Rajarshi

    2011-01-01

    We present a study of correlations among conserved charges like baryon number, electric charge and strangeness in the framework of 2+1 flavor Polyakov loop extended Nambu-Jona-Lasinio model at vanishing chemical potentials, up to fourth order. Correlations up to second order have been measured in lattice QCD, which compares well with our estimates given the inherent difference in the pion masses in the two systems. Possible physical implications of these correlations and their importance in understanding the matter obtained in heavy-ion collisions are discussed. We also present a comparison of the results with the commonly used unbound effective potential in the quark sector of this model.

  9. Transport Code for Regular Triangular Geometry

    Energy Science and Technology Software Center (ESTSC)

    1993-06-09

    DIAMANT2 solves the two-dimensional static multigroup neutron transport equation in planar regular triangular geometry. Both regular and adjoint, inhomogeneous and homogeneous problems subject to vacuum, reflective or input specified boundary flux conditions are solved. Anisotropy is allowed for the scattering source. Volume and surface sources are allowed for inhomogeneous problems.

  10. Dynamics and thermodynamics of a nonlocal Polyakov--Nambu--Jona-Lasinio model with running coupling

    SciTech Connect

    Hell, T.; Roessner, S.; Cristoforetti, M.; Weise, W.

    2009-01-01

    A nonlocal covariant extension of the two-flavor Nambu and Jona-Lasinio model is constructed, with built-in constraints from the running coupling of QCD at high-momentum and instanton physics at low-momentum scales. Chiral low-energy theorems and basic current algebra relations involving pion properties are shown to be reproduced. The momentum-dependent dynamical quark mass derived from this approach is in agreement with results from Dyson-Schwinger equations and lattice QCD. At finite temperature, inclusion of the Polyakov loop and its gauge invariant coupling to quarks reproduces the dynamical entanglement of the chiral and deconfinement crossover transitions as in the (local) Polyakov-loop-extended Nambu and Jona-Lasinio model, but now without the requirement of introducing an artificial momentum cutoff. Steps beyond the mean-field approximation are made including mesonic correlations through quark-antiquark ring summations. Various quantities of interest (pressure, energy density, speed of sound, etc.) are calculated and discussed in comparison with lattice QCD thermodynamics at zero chemical potential. The extension to finite quark chemical potential and the phase diagram in the (T,{mu})-plane are also discussed.

  11. Nonlocal Polyakov-Nambu-Jona-Lasinio model and imaginary chemical potential

    NASA Astrophysics Data System (ADS)

    Kashiwa, Kouji; Hell, Thomas; Weise, Wolfram

    2011-09-01

    With the aim of setting constraints for the modeling of the QCD phase diagram, the phase structure of the two-flavor Polyakov-loop-extended Nambu and Jona-Lasinio (PNJL) model is investigated in the range of imaginary chemical potentials (μI) and compared with available Nf=2 lattice QCD results. The calculations are performed using the advanced nonlocal version of the PNJL model with the inclusion of vector-type quasiparticle interactions between quarks, and with wave-function-renormalization corrections. It is demonstrated that the nonlocal PNJL model reproduces important features of QCD at finite μI, such as the Roberge-Weiss (RW) periodicity and the RW transition. Chiral and deconfinement transition temperatures for Nf=2 turn out to coincide both at zero chemical potential and at finite μI. Detailed studies are performed concerning the RW endpoint and its neighborhood where a first-order transition occurs.

  12. Resonances and bound states of the 't Hooft-Polyakov monopole

    SciTech Connect

    Russell, K. M.; Schroers, B. J.

    2011-03-15

    We present a systematic approach to the linearized Yang-Mills-Higgs equations in the background of a 't Hooft-Polyakov monopole and use it to unify and extend previous studies of their spectral properties. We show that a quaternionic formulation allows for a compact and efficient treatment of the linearized equations in the Bogomol'nyi-Prasad-Sommerfield limit of vanishing Higgs self-coupling and use it to study both scattering and bound states. We focus on the sector of vanishing generalized angular momentum and analyze it numerically, putting zero-energy bound states, Coulomb bound states, and infinitely many Feshbach resonances into a coherent picture. We also consider the linearized Yang-Mills-Higgs equations with nonvanishing Higgs self-coupling and confirm the occurrence of Feshbach resonances in this situation.

  13. Polyakov loop in 2+1 flavor QCD from low to high temperatures

    NASA Astrophysics Data System (ADS)

    Bazavov, A.; Brambilla, N.; Ding, H.-T.; Petreczky, P.; Schadler, H.-P.; Vairo, A.; Weber, J. H.; Tumqcd Collaboration

    2016-06-01

We study the free energy of a static quark in QCD with 2+1 flavors in a wide temperature region, starting at 116 MeV, and also calculate the Polyakov loop susceptibilities using gradient flow. We discuss the implications of our findings for the deconfinement and chiral crossover phenomena at physical values of the quark masses. Finally a comparison of the lattice results at high temperatures with the weak-coupling calculations is presented.

  14. Operator regularization and quantum gravity

    NASA Astrophysics Data System (ADS)

    Mann, R. B.; Tarasov, L.; Mckeon, D. G. C.; Steele, T.

    1989-01-01

Operator regularization has been shown to be a symmetry-preserving means of computing Green functions in gauge symmetric and supersymmetric theories which avoids the explicit occurrence of divergences. In this paper we examine how this technique can be applied to computing quantities in non-renormalizable theories in general and quantum gravity in particular. Specifically, we consider various processes to one- and two-loop order in φ4 theory in N > 4 dimensions, for which the theory is non-renormalizable. We then apply operator regularization to determine the one-loop graviton correction to the spinor propagator. The effective action for quantum scalars in a background gravitational field is evaluated in operator regularization using both the weak-field method and the normal coordinate expansion. The latter case yields a new derivation of the Schwinger-de Witt expansion which avoids the use of recursion relations. Finally we consider quantum gravity coupled to scalar fields in n dimensions, evaluating those parts of the effective action that (in other methods) diverge as n → 4. We recover the same divergence structure as is found using dimensional regularization if n ≠ 4, but if n = 4 at the outset no divergence arises at any stage of the calculation. The non-renormalizability of such theories manifests itself in the scale dependence at one-loop order of terms that do not appear in the original Lagrangian. In all cases our regularization procedure does not break any invariances present in the theory and avoids the occurrence of explicit divergences.

  15. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung Fung

    1988-01-01

Problem partitioning of regular computation over two-dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation-to-communication ratio.
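The trade-off the abstract describes can be made concrete with a toy cost model: computation scales with a block's area, communication with its perimeter. The following sketch is a hypothetical illustration (the function name and cost model are assumptions, not code from the paper):

```python
# Sketch: computation-to-communication ratio for block partitions of an
# n-by-n mesh with nearest-neighbor (5-point stencil) communication.
# The cost model here is an illustrative assumption, not the paper's.

def comp_to_comm_ratio(n, p_rows, p_cols):
    """Each processor owns an (n/p_rows) x (n/p_cols) block.

    Computation per step is proportional to the block area;
    communication is proportional to the block perimeter
    (halo exchange with neighboring blocks).
    """
    bh, bw = n // p_rows, n // p_cols   # block height and width
    computation = bh * bw               # one update per mesh point
    communication = 2 * (bh + bw)       # halo cells exchanged per step
    return computation / communication

# Square blocks maximize area relative to perimeter, so for the same
# number of processors they beat strip partitions:
print(comp_to_comm_ratio(1024, 4, 4))   # 16 blocks of 256 x 256
print(comp_to_comm_ratio(1024, 16, 1))  # 16 strips of 64 x 1024
```

Under this model, 16 square blocks give a ratio of 64, while 16 strips give roughly 30, which is the kind of comparison the paper's partition-family analysis formalizes.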

  16. Continuum regularization of gauge theory with fermions

    SciTech Connect

    Chan, H.S.

    1987-03-01

    The continuum regularization program is discussed in the case of d-dimensional gauge theory coupled to fermions in an arbitrary representation. Two physically equivalent formulations are given. First, a Grassmann formulation is presented, which is based on the two-noise Langevin equations of Sakita, Ishikawa and Alfaro and Gavela. Second, a non-Grassmann formulation is obtained by regularized integration of the matter fields within the regularized Grassmann system. Explicit perturbation expansions are studied in both formulations, and considerable simplification is found in the integrated non-Grassmann formalism.

  17. The consequences of SU(3) color singletness, Polyakov loop and Z(3) symmetry on a quark-gluon gas

    NASA Astrophysics Data System (ADS)

    Aminul Islam, Chowdhury; Abir, Raktim; Mustafa, Munshi G.; Ray, Rajarshi; Ghosh, Sanjay K.

    2014-02-01

Based on quantum statistical mechanics, we show that the SU(3) color singlet ensemble of a quark-gluon gas exhibits a Z(3) symmetry through the normalized character in the fundamental representation and also becomes equivalent, within a stationary point approximation, to the ensemble given by the Polyakov loop. In addition, a Polyakov loop gauge potential is obtained by considering spatial gluons along with the invariant Haar measure at each space point. The probability of the normalized character in SU(3) vis-à-vis the Polyakov loop is found to be maximum at a particular value, exhibiting a strong color correlation. This clearly indicates a transition from a color correlated to an uncorrelated phase, or vice versa. When quarks are included in the gauge fields, a metastable state appears in the temperature range 145 ⩽ T(MeV) ⩽ 170 due to the explicit Z(3) symmetry breaking in the quark-gluon system. Beyond T ⩾ 170 MeV, the metastable state disappears and stable domains appear. At low temperatures, a dynamical recombination of ionized Z(3) color charges into a color singlet Z(3) confined phase is evident, along with a confining background that originates from the circulation of two virtual spatial gluons with conjugate Z(3) phases in a closed loop. We also discuss other possible consequences of the center domains in the color deconfined phase at high temperatures. Communicated by Steffen Bass
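The Z(3) transformation property of the normalized fundamental character, central to the abstract above, is easy to verify numerically. In the sketch below the function name and angles are illustrative; the formula is the standard character of a diagonal SU(3) element:

```python
import cmath
import math

# The normalized SU(3) character in the fundamental representation for a
# diagonal element diag(e^{i t1}, e^{i t2}, e^{-i (t1 + t2)}):
#     chi = (e^{i t1} + e^{i t2} + e^{-i (t1 + t2)}) / 3
# Multiplying the group element by a center element z = exp(2*pi*i/3)
# shifts each angle by 2*pi/3 and multiplies chi by z.

def chi_fund(t1, t2):
    return (cmath.exp(1j * t1) + cmath.exp(1j * t2)
            + cmath.exp(-1j * (t1 + t2))) / 3.0

z = cmath.exp(2j * math.pi / 3)        # a Z(3) center element of SU(3)
t1, t2 = 0.7, -1.3                     # arbitrary test angles
shifted = chi_fund(t1 + 2 * math.pi / 3, t2 + 2 * math.pi / 3)
print(abs(shifted - z * chi_fund(t1, t2)))   # ~0: chi -> z * chi
```

The character equals 1 at the identity and picks up exactly the center phase under a Z(3) shift, which is the symmetry of the color singlet ensemble the paper exploits.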

  18. Spinodal instabilities of baryon-rich quark-gluon plasma in the Polyakov-Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Li, Feng; Ko, Che Ming

    2016-03-01

    Using the Polyakov-Nambu-Jona-Lasinio model, we study the spinodal instability of a baryon-rich quark-gluon plasma in the linear response theory. We find that the spinodal unstable region in the temperature and density plane shrinks with increasing wave number of the unstable mode and is also reduced if the effect of the Polyakov loop is not included. In the small wave number or long wavelength limit, the spinodal boundaries in both cases of with and without the Polyakov loop coincide with those determined from the isothermal spinodal instability in the thermodynamic approach. Also, the vector interactions among quarks are found to suppress unstable modes of all wave numbers. Moreover, the growth rate of unstable modes initially increases with the wave number but is reduced when the wave number becomes large. Including the collisional effect from quark scattering via the linearized Boltzmann equation, we further find that it decreases the growth rate of unstable modes of all wave numbers. The relevance of these results to relativistic heavy ion collisions is discussed.

  19. Regular FPGA based on regular fabric

    NASA Astrophysics Data System (ADS)

    Xun, Chen; Jianwen, Zhu; Minxuan, Zhang

    2011-08-01

In the sub-wavelength regime, design for manufacturability (DFM) becomes increasingly important for field programmable gate arrays (FPGAs). In this paper, an automated tile generation flow targeting micro-regular fabric is reported. Using a publicly accessible, well-documented academic FPGA as a case study, we found that, compared to the tile generators previously reported, our generated micro-regular tile incurs less than 10% area overhead, which could potentially be recovered by process window optimization thanks to its superior printability. In addition, we demonstrate that on 45 nm technology, the generated FPGA tile reduces lithography-induced process variation by 33% and the probability of failure by 21.2%. If a further overhead of 10% area can be recovered by enhanced resolution, we can achieve a variation reduction of 93.8% and reduce the probability of failure by 16.2%.

  20. Regular gravitational lagrangians

    NASA Astrophysics Data System (ADS)

    Dragon, Norbert

    1992-02-01

The Einstein action with vanishing cosmological constant is, for appropriate field content, the unique local action which is regular at the fixed point of affine coordinate transformations. Imposing this regularity requirement, one also excludes Wess-Zumino counterterms which trade gravitational anomalies for Lorentz anomalies. One has to expect dilatational and SL(D) anomalies. If these anomalies are absent and if the regularity of the quantum vertex functional can be controlled, then Einstein gravity is renormalizable. On leave of absence from Institut für Theoretische Physik, Universität Hannover, W-3000 Hannover 1, FRG.

  1. Meson properties at finite temperature in a three flavor nonlocal chiral quark model with Polyakov loop

    SciTech Connect

    Contrera, G. A.; Dumm, D. Gomez; Scoccola, Norberto N.

    2010-03-01

We study the finite temperature behavior of light scalar and pseudoscalar meson properties in the context of a three-flavor nonlocal chiral quark model. The model includes mixing with active strangeness degrees of freedom, and takes care of the effect of gauge interactions by coupling the quarks with the Polyakov loop. We analyze the chiral restoration and deconfinement transitions, as well as the temperature dependence of meson masses, mixing angles and decay constants. The critical temperature is found to be T{sub c} {approx_equal} 202 MeV, in better agreement with lattice results than the value recently obtained in the local SU(3) PNJL model. It is seen that above T{sub c} pseudoscalar meson masses get increased, becoming degenerate with the masses of their chiral partners. The temperatures at which this matching occurs depend on the strange quark composition of the corresponding mesons. The topological susceptibility shows a sharp decrease after the chiral transition, signalling the vanishing of the U(1){sub A} anomaly for large temperatures.

  2. Topological Symmetry, Spin Liquids and CFT Duals of Polyakov Model with Massless Fermions

    SciTech Connect

    Unsal, Mithat

    2008-04-30

We prove the absence of a mass gap and confinement in the Polyakov model with massless complex fermions in any representation of the gauge group. A U(1){sub *} topological shift symmetry protects the masslessness of one dual photon. This symmetry emerges in the IR as a consequence of the Callias index theorem and abelian duality. For matter in the fundamental representation, the infrared limits of this class of theories interpolate between weakly and strongly coupled conformal field theory (CFT) depending on the number of flavors, and provide an infinite class of CFTs in d = 3 dimensions. The long distance physics of the model is the same as that of certain stable spin liquids. Altering the topology of the adjoint Higgs field by turning it into a compact scalar does not change the long distance dynamics in perturbation theory; however, non-perturbative effects lead to a mass gap for the gauge fluctuations. This provides conceptual clarity to many subtle issues about compact QED{sub 3} discussed in the context of quantum magnets, spin liquids and phase fluctuation models in cuprate superconductors. These constructions also provide new insights into zero temperature gauge theory dynamics on R{sup 2,1} and R{sup 2,1} x S{sup 1}. The confined versus deconfined long distance dynamics is characterized by a discrete versus continuous topological symmetry.

  3. An effective thermodynamic potential from the instanton vacuum with the Polyakov loop

    NASA Astrophysics Data System (ADS)

    Nam, Seung-Il

    2012-02-01

In this talk, we report our recent studies on an effective thermodynamic potential (Ωeff) at finite temperature (T ≠ 0) and zero quark-chemical potential (μR = 0), using the singular-gauge instanton solution and the Matsubara formula for Nc = 3 and Nf = 2 in the chiral limit, i.e. mq = 0. The momentum-dependent constituent-quark mass is computed as a function of T, together with the Harrington-Shepard caloron solution in the large-Nc limit. In addition, we take into account the imaginary quark-chemical potential μI ≡ A4, identified with the traced Polyakov loop (Φ) as an order parameter for the ℤ(Nc) symmetry, characterizing the confinement (intact) and deconfinement (spontaneously broken) phases. As a consequence, we observe the crossover of the chiral (χ) order parameter σ2 and Φ. It also turns out that the critical temperature for the deconfinement phase transition, Tcℤ, is lowered by about (5-10)% in comparison to the case with a constant constituent-quark mass. This behavior can be understood from considerable effects of the partial chiral restoration and the nontrivial QCD vacuum on Φ. Numerical results show that the crossover transitions occur at (Tcχ, Tcℤ) ≈ (216, 227) MeV.

  4. Thermodynamics and quark susceptibilities: A Monte Carlo approach to the Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Cristoforetti, M.; Hell, T.; Klein, B.; Weise, W.

    2010-06-01

    The Monte-Carlo method is applied to the Polyakov-loop extended Nambu-Jona-Lasinio model. This leads beyond the saddle-point approximation in a mean-field calculation and introduces fluctuations around the mean fields. We study the impact of fluctuations on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand canonical partition function with respect to the quark chemical potential and present a comparison with extrapolations from lattice QCD. We show that the introduction of fluctuations produces only small changes in the behavior of the order parameters for chiral symmetry restoration and the deconfinement transition. On the other hand, we find that fluctuations are necessary in order to reproduce lattice data for the flavor nondiagonal quark susceptibilities. Of particular importance are pion fields, the contribution of which is strictly zero in the saddle point approximation.

  5. Vector meson spectral function and dilepton rate in the presence of strong entanglement effect between the chiral and the Polyakov loop dynamics

    NASA Astrophysics Data System (ADS)

    Islam, Chowdhury Aminul; Majumder, Sarbani; Mustafa, Munshi G.

    2015-11-01

    In this work we have reexplored our earlier study on the vector meson spectral function and its spectral property in the form of dilepton rate in a two-flavor Polyakov loop extended Nambu-Jona-Lasinio (PNJL) model in the presence of a strong entanglement between the chiral and Polyakov loop dynamics. The entanglement considered here is generated through the four-quark scalar-type interaction in which the coupling strength depends on the Polyakov loop and runs with temperature and chemical potential. The entanglement effect is also considered for the four-quark vector-type interaction in the same manner. We observe that the entanglement effect relatively enhances the color degrees of freedom due to the running of both the scalar and vector couplings. This modifies the vector meson spectral function and, thus, the spectral property such as the dilepton production rate in the low invariant mass also gets modified.

  6. Regularized Structural Equation Modeling

    PubMed Central

    Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.

    2016-01-01

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating easier to understand and simpler models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poor fitting models, and the creation of models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019
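The penalization idea behind RegSEM, adding a lasso penalty to a fit function so that weak parameters are shrunk exactly to zero, can be illustrated on ordinary least squares, the simpler setting from which RegSEM generalizes. The sketch below uses proximal gradient (ISTA); all names and data are hypothetical, not from the paper:

```python
import numpy as np

# Lasso illustration of the penalty used in RegSEM:
# minimize  (1/2n) ||X b - y||^2 + lam * ||b||_1  via proximal gradient.
# Data and names are illustrative assumptions.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso(X, y, lam, steps=5000):
    n = X.shape[0]
    b = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # safe (small) step size
    for _ in range(steps):
        grad = X.T @ (X @ b - y) / n         # gradient of the smooth part
        b = soft_threshold(b - step * grad, step * lam)  # prox of L1 term
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_b = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ true_b + 0.1 * rng.normal(size=100)
b = lasso(X, y, lam=0.1)
print(b)  # coefficients near zero are shrunk to (or toward) exactly zero
```

RegSEM applies the same penalty not to regression weights but to selected parameters of a structural equation model's fit function, which is what makes the resulting models sparser and easier to interpret.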

  7. Nonlocal Polyakov-Nambu-Jona-Lasinio model with wave function renormalization at finite temperature and chemical potential

    SciTech Connect

    Contrera, G. A.; Orsaria, M.; Scoccola, N. N.

    2010-09-01

    We study the phase diagram of strongly interacting matter in the framework of a nonlocal SU(2) chiral quark model which includes wave function renormalization and coupling to the Polyakov loop. Both nonlocal interactions based on the frequently used exponential form factor, and on fits to the quark mass and renormalization functions obtained in lattice calculations are considered. Special attention is paid to the determination of the critical points, both in the chiral limit and at finite quark mass. In particular, we study the position of the critical end point as well as the value of the associated critical exponents for different model parametrizations.

  8. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  9. Krein regularization of QED

    NASA Astrophysics Data System (ADS)

    Forghan, B.; Takook, M. V.; Zarei, A.

    2012-09-01

    In this paper, the electron self-energy, photon self-energy and vertex functions are explicitly calculated in Krein space quantization including quantum metric fluctuation. The results are automatically regularized or finite. The magnetic anomaly and Lamb shift are also calculated in the one loop approximation in this method. Finally, the obtained results are compared to conventional QED results.

  10. Geometry of spinor regularization

    NASA Technical Reports Server (NTRS)

    Hestenes, D.; Lounesto, P.

    1983-01-01

    The Kustaanheimo theory of spinor regularization is given a new formulation in terms of geometric algebra. The Kustaanheimo-Stiefel matrix and its subsidiary condition are put in a spinor form directly related to the geometry of the orbit in physical space. A physically significant alternative to the KS subsidiary condition is discussed. Derivations are carried out without using coordinates.

  11. Regular transport dynamics produce chaotic travel times.

    PubMed

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system. PMID:25019866

  12. Gauge approach to gravitation and regular Big Bang theory

    NASA Astrophysics Data System (ADS)

    Minkevich, A. V.

    2006-03-01

A field-theoretical scheme of a regular Big Bang in 4-dimensional physical space-time, built in the framework of the gauge approach to gravitation, is discussed. The regular bouncing character of homogeneous isotropic cosmological models is ensured by a gravitational repulsion effect at extreme conditions, without quantum gravitational corrections. The most general properties of regular inflationary cosmological models are examined. The developed theory is valid if the energy density of gravitating matter is positive and the energy dominance condition is fulfilled.

  13. 2+1 flavor Polyakov Nambu Jona-Lasinio model at finite temperature and nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Fu, Wei-Jie; Zhang, Zhao; Liu, Yu-Xin

    2008-01-01

    We extend the Polyakov-loop improved Nambu Jona-Lasinio model to 2+1 flavor case to study the chiral and deconfinement transitions of strongly interacting matter at finite temperature and nonzero chemical potential. The Polyakov loop, the chiral susceptibility of light quarks (u and d), and the strange quark number susceptibility as functions of temperature at zero chemical potential are determined and compared with the recent results of lattice QCD simulations. We find that there is always an inflection point in the curve of strange quark number susceptibility accompanying the appearance of the deconfinement phase, which is consistent with the result of lattice QCD simulations. Predictions for the case at nonzero chemical potential and finite temperature are made as well. We give the phase diagram in terms of the chemical potential and temperature and find that the critical end point moves down to low temperature and finally disappears with the decrease of the strength of the ’t Hooft flavor-mixing interaction.

  14. Perturbations in a regular bouncing universe

    SciTech Connect

    Battefeld, T.J.; Geshnizjani, G.

    2006-03-15

    We consider a simple toy model of a regular bouncing universe. The bounce is caused by an extra timelike dimension, which leads to a sign flip of the {rho}{sup 2} term in the effective four dimensional Randall Sundrum-like description. We find a wide class of possible bounces: big bang avoiding ones for regular matter content, and big rip avoiding ones for phantom matter. Focusing on radiation as the matter content, we discuss the evolution of scalar, vector and tensor perturbations. We compute a spectral index of n{sub s}=-1 for scalar perturbations and a deep blue index for tensor perturbations after invoking vacuum initial conditions, ruling out such a model as a realistic one. We also find that the spectrum (evaluated at Hubble crossing) is sensitive to the bounce. We conclude that it is challenging, but not impossible, for cyclic/ekpyrotic models to succeed, if one can find a regularized version.

  15. Phase diagram of baryon matter in the SU(2) Nambu – Jona-Lasinio model with a Polyakov loop

    NASA Astrophysics Data System (ADS)

    Kalinovsky, Yu L.; Toneev, V. D.; Friesen, A. V.

    2016-04-01

    The nature of phase transitions in hot and dense nuclear matter is discussed in the framework of the effective SU(2) Nambu – Jona-Lasinio model with a Polyakov loop and two quark flavors, one of the few models describing the properties of both the chiral and the confinement-deconfinement phase transitions. We consider the parameters of the model and examine additional interactions that influence the structure of the phase diagram and the positions of critical points in it. The effect of meson correlations on the thermodynamic properties of the quark-meson system is examined. The evolution of the model alongside changes in the understanding of the phase diagram structure is discussed.

  16. Dimensional Reduction and Hadronic Processes

    SciTech Connect

    Signer, Adrian; Stoeckinger, Dominik

    2008-11-23

    We consider the application of regularization by dimensional reduction to NLO corrections of hadronic processes. The general collinear singularity structure is discussed, the origin of the regularization-scheme dependence is identified and transition rules to other regularization schemes are derived.

  17. A dynamic phase-field model for structural transformations and twinning: Regularized interfaces with transparent prescription of complex kinetics and nucleation. Part I: Formulation and one-dimensional characterization

    NASA Astrophysics Data System (ADS)

    Agrawal, Vaibhav; Dayal, Kaushik

    2015-12-01

    The motion of microstructural interfaces is important in modeling twinning and structural phase transformations. Continuum models fall into two classes: sharp-interface models, where interfaces are singular surfaces; and regularized-interface models, such as phase-field models, where interfaces are smeared out. The former are challenging for numerical solutions because the interfaces need to be explicitly tracked, but have the advantage that the kinetics of existing interfaces and the nucleation of new interfaces can be transparently and precisely prescribed. In contrast, phase-field models do not require explicit tracking of interfaces, thereby enabling relatively simple numerical calculations, but the specification of kinetics and nucleation is both restrictive and extremely opaque. This prevents straightforward calibration of phase-field models to experiment and/or molecular simulations, and breaks the multiscale hierarchy of passing information from atomic to continuum. Consequently, phase-field models cannot be confidently used in dynamic settings. This shortcoming of existing phase-field models motivates our work. We present the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients are a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation - through the source term of the conservation law - and kinetics - through a distinct interfacial velocity field. A formal limit of the kinetic driving force recovers the classical continuum sharp-interface driving force, providing confidence in both the re-parametrized energy and the evolution statement. We present some 1D calculations characterizing the formulation.

  18. Convex nonnegative matrix factorization with manifold regularization.

    PubMed

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. PMID:25523040
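
    The graph-regularized term at the heart of such methods is easy to state concretely. As a minimal sketch (illustrative only, not the GCNMF algorithm itself; the function names are hypothetical), the penalty tr(V L V^T), built from the unnormalized graph Laplacian of an affinity matrix, rewards encodings that vary smoothly over the data graph:

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W of an affinity matrix W."""
    return np.diag(W.sum(axis=1)) - W

def manifold_penalty(V, L):
    """tr(V L V^T) = 0.5 * sum_ij W_ij ||v_i - v_j||^2 over encoding columns v_i."""
    return np.trace(V @ L @ V.T)

# Two strongly connected points with different encodings are penalized;
# identical encodings incur zero penalty.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
L = graph_laplacian(W)
V_far  = np.array([[1.0, 3.0]])   # encodings 1 and 3
V_same = np.array([[2.0, 2.0]])   # identical encodings
```

    Since tr(V L V^T) equals half the affinity-weighted sum of squared distances between encodings, adding it to a factorization objective pulls neighboring points on the manifold toward similar low-dimensional representations.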

  19. D → ∞ saddle-point spectrum analysis of the open bosonic Polyakov string in R^D × SO(N)

    SciTech Connect

    Botelho, L.C.L.

    1987-02-15

    In this paper, we investigate the role of the chiral anomaly in determining the spectrum, at the saddle-point approximation D → ∞, of the recently considered Polyakov formulation of bosonic strings moving in R^D × G with K = 2, where G is the group manifold SO(N). The main result is that, in contrast to the case of the critical dimension, the spectrum is not sensitive to the model's chiral anomaly in the D → ∞ limit.

  20. Some results on the spectra of strongly regular graphs

    NASA Astrophysics Data System (ADS)

    Vieira, Luís António de Almeida; Mano, Vasco Moço

    2016-06-01

    Let G be a strongly regular graph whose adjacency matrix is A. We associate with the strongly regular graph G a real finite-dimensional Euclidean Jordan algebra 𝒱 of rank three, spanned by I and the natural powers of A, endowed with the Jordan product of matrices and with the inner product given by the usual trace of matrices. Finally, by analyzing the binomial Hadamard series of an element of 𝒱, we establish some inequalities on the parameters and on the spectrum of a strongly regular graph, like those established in Theorems 3 and 4.
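
    The rank-three structure rests on the defining parameter identity of a strongly regular graph, which makes A² a linear combination of I, A and the all-ones matrix J. A small numerical check (illustrative only; the graph choice and function name are ours) using the Petersen graph, a (10, 3, 0, 1) strongly regular graph:

```python
import numpy as np

def petersen_adjacency():
    """Adjacency matrix of the Petersen graph: outer pentagon 0-4,
    spokes to 5-9, inner pentagram on 5-9."""
    A = np.zeros((10, 10), dtype=int)
    for i in range(5):
        for u, v in [(i, (i + 1) % 5),          # outer pentagon
                     (i, i + 5),                # spokes
                     (i + 5, (i + 2) % 5 + 5)]: # inner pentagram
            A[u, v] = A[v, u] = 1
    return A

A = petersen_adjacency()
n, k, lam, mu = 10, 3, 0, 1
I = np.eye(n, dtype=int)
J = np.ones((n, n), dtype=int)
# Defining identity of an (n, k, lambda, mu) strongly regular graph:
#   A^2 = k I + lambda A + mu (J - I - A)
assert np.array_equal(A @ A, k * I + lam * A + mu * (J - I - A))
```

    The identity shows that span{I, A, A²} is closed under multiplication, which is exactly what gives the associated algebra rank three; the spectrum of the Petersen graph (eigenvalues 3, 1, -2) follows from the same parameters.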

  1. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  2. 75 FR 53966 - Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm Credit Administration in...

  3. Regularly timed events amid chaos.

    PubMed

    Blakely, Jonathan N; Cooper, Roy M; Corron, Ned J

    2015-11-01

    We show rigorously that the solutions of a class of chaotic oscillators are characterized by regularly timed events in which the derivative of the solution is instantaneously zero. The perfect regularity of these events is in stark contrast with the well-known unpredictability of chaos. We explore some consequences of these regularly timed events through experiments using chaotic electronic circuits. First, we show that a feedback loop can be implemented to phase lock the regularly timed events to a periodic external signal. In this arrangement the external signal regulates the timing of the chaotic signal but does not strictly lock its phase. That is, phase slips of the chaotic oscillation persist without disturbing the timing of the regular events. Second, we couple the regularly timed events of one chaotic oscillator to those of another. A state of synchronization is observed where the oscillators exhibit synchronized regular events while their chaotic amplitudes and phases evolve independently. Finally, we add additional coupling to synchronize the amplitudes as well, however in the opposite direction, illustrating the independence of the amplitudes from the regularly timed events. PMID:26651759

  4. Natural selection and mechanistic regularity.

    PubMed

    DesAutels, Lane

    2016-06-01

    In this article, I address the question of whether natural selection operates regularly enough to qualify as a mechanism of the sort characterized by Machamer, Darden, and Craver (2000). Contrary to an influential critique by Skipper and Millstein (2005), I argue that natural selection can be seen to be regular enough to qualify as an MDC mechanism just fine, as long as we pay careful attention to some important distinctions regarding mechanistic regularity and abstraction. Specifically, I suggest that when we distinguish between process vs. product regularity, mechanism-internal vs. mechanism-external sources of irregularity, and abstract vs. concrete regularity, we can see that natural selection is only irregular in senses that are unthreatening to its status as an MDC mechanism. PMID:26921876

  5. A dynamic phase-field model for structural transformations and twinning: Regularized interfaces with transparent prescription of complex kinetics and nucleation. Part II: Two-dimensional characterization and boundary kinetics

    NASA Astrophysics Data System (ADS)

    Agrawal, Vaibhav; Dayal, Kaushik

    2015-12-01

    A companion paper presented the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients were a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation through the source term of the conservation law and of kinetics through an interfacial velocity field. This model overcomes an important shortcoming of existing phase-field models, namely that the specification of kinetics and nucleation is both restrictive and extremely opaque. In this paper, we present a number of numerical calculations - in one and two dimensions - that characterize our formulation. These calculations illustrate (i) highly-sensitive rate-dependent nucleation; (ii) independent prescription of the forward and backward nucleation stresses without changing the energy landscape; (iii) stick-slip interface kinetics; (iv) the competition between nucleation and kinetics in determining the final microstructural state; (v) the effect of anisotropic kinetics; and (vi) the effect of non-monotone kinetics. These calculations demonstrate the ability of this formulation to precisely prescribe complex nucleation and kinetics in a simple and transparent manner. We also extend our conservation statement to describe the kinetics of the junction lines between microstructural interfaces and boundaries. This enables us to prescribe an additional kinetic relation for the boundary, and we examine the interplay between the bulk kinetics and the junction kinetics.

  6. Laplacian Regularized Low-Rank Representation and Its Applications.

    PubMed

    Yin, Ming; Gao, Junbin; Lin, Zhouchen

    2016-03-01

    Low-rank representation (LRR) has recently attracted a great deal of attention due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. For a given set of observed data corrupted with sparse errors, LRR aims at learning a lowest-rank representation of all data jointly. LRR has broad applications in pattern recognition, computer vision and signal processing. In the real world, data often reside on low-dimensional manifolds embedded in a high-dimensional ambient space. However, the LRR method does not take into account the non-linear geometric structures within data, thus the locality and similarity information among data may be missing in the learning process. To improve LRR in this regard, we propose a general Laplacian regularized low-rank representation framework for data representation into which a hypergraph Laplacian regularizer can be readily introduced, i.e., a Non-negative Sparse Hyper-Laplacian regularized LRR model (NSHLRR). By taking advantage of the graph regularizer, our proposed method not only can represent the global low-dimensional structures, but also capture the intrinsic non-linear geometric information in data. The extensive experimental results on image clustering, semi-supervised image classification and dimensionality reduction tasks demonstrate the effectiveness of the proposed method. PMID:27046494

  7. NONCONVEX REGULARIZATION FOR SHAPE PRESERVATION

    SciTech Connect

    CHARTRAND, RICK

    2007-01-16

    The authors show that using a nonconvex penalty term to regularize image reconstruction can substantially improve the preservation of object shapes. The commonly used total-variation regularization, ∫|∇u|, penalizes the length of the object edges. They show that ∫|∇u|^p, 0 < p < 1, only penalizes edges of dimension at least 2-p, and thus does not penalize finite-length edges at all. They give numerical examples showing the resulting improvement in shape preservation.
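
    The contrast scaling behind this claim can be verified numerically. As a hedged sketch (a crude forward-difference discretization, ours, not the authors' reconstruction code), the discrete penalty Σ|∇u|^p grows only like c^p with edge contrast c, so for p < 1 sharp edges are penalized sublinearly compared with total variation:

```python
import numpy as np

def tv_penalty(u, p=1.0):
    """Discrete analogue of integral |grad u|^p over an image u."""
    gx = np.diff(u, axis=0, append=u[-1:, :])  # forward differences,
    gy = np.diff(u, axis=1, append=u[:, -1:])  # replicated boundary
    return np.sum(np.hypot(gx, gy) ** p)

# Binary square: an object with finite-length edges of contrast 1.
u = np.zeros((32, 32))
u[8:24, 8:24] = 1.0
```

    Halving the edge contrast halves the p = 1 penalty but shrinks the p = 0.5 penalty only by a factor √2, illustrating why the nonconvex penalty tolerates sharp, high-contrast edges.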

  8. Geometric continuum regularization of quantum field theory

    SciTech Connect

    Halpern, M.B. (Dept. of Physics)

    1989-11-08

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs.

  9. Regular patterns stabilize auditory streams.

    PubMed

    Bendixen, Alexandra; Denham, Susan L; Gyimesi, Kinga; Winkler, István

    2010-12-01

    The auditory system continuously parses the acoustic environment into auditory objects, usually representing separate sound sources. Sound sources typically show characteristic emission patterns. These regular temporal sound patterns are possible cues for distinguishing sound sources. The present study was designed to test whether regular patterns are used as cues for source distinction and to specify the role that detecting these regularities may play in the process of auditory stream segregation. Participants were presented with tone sequences, and they were asked to continuously indicate whether they perceived the tones in terms of a single coherent sequence of sounds (integrated) or as two concurrent sound streams (segregated). Unknown to the participant, in some stimulus conditions, regular patterns were present in one or both putative streams. In all stimulus conditions, participants' perception switched back and forth between the two sound organizations. Importantly, regular patterns occurring in either one or both streams prolonged the mean duration of two-stream percepts, whereas the duration of one-stream percepts was unaffected. These results suggest that temporal regularities are utilized in auditory scene analysis. It appears that the role of this cue lies in stabilizing streams once they have been formed on the basis of simpler acoustic cues. PMID:21218898

  10. Extended Locus of Regular Nuclei

    SciTech Connect

    Amon, L.; Casten, R. F.

    2007-04-23

    A new family of IBM Hamiltonians, characterized by certain parameter values, was found about 15 years ago by Alhassid and Whelan to display almost regular dynamics, and yet these solutions to the IBM do not belong to any of the known dynamical symmetry limits (vibrational, rotational and γ-unstable). Rather, they comprise an 'Arc of Regularity' cutting through the interior of the symmetry triangle from U(5) to SU(3), where suddenly there is a decrease in chaoticity and a significant increase in regularity. A few years ago, the first set of nuclei lying along this arc was discovered. The purpose of the present work is to search more broadly in the nuclear chart, at all nuclei from Z = 40-100, for other examples of such 'regular' nuclei. Using a unique signature for such nuclei involving energy differences of certain excited states, we have identified an additional set of 12 nuclei lying near or along the arc. Some of these nuclei are known to have low-lying intruder states, and care must therefore be taken in judging their structure. The regularity exhibited by nuclei near the arc presumably reflects the validity or partial validity of some new, as yet unknown, quantum number describing these systems and giving the regularity found for them.

  11. Automatic Constraint Detection for 2D Layout Regularization.

    PubMed

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art. PMID:26394426
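
    A toy 1-D analogue (ours, not the paper's detection algorithm) conveys the flavor of the approach: coordinates closer than a tolerance are treated as a detected alignment constraint, and the quadratic program min Σ(x_i - x0_i)² subject to equality of the constrained coordinates is solved in closed form by group averaging:

```python
def snap_coordinates(xs, tol):
    """Toy 1-D layout regularization (assumes a non-empty input list):
    coordinates within `tol` of each other are assumed aligned; the QP
      min sum (x_i - x0_i)^2  s.t.  x_i = x_j  for each detected pair
    has the closed-form solution 'replace each group by its mean'."""
    order = sorted(enumerate(xs), key=lambda t: t[1])
    groups, cur = [], [order[0]]
    for item in order[1:]:
        if item[1] - cur[-1][1] <= tol:   # detect an alignment constraint
            cur.append(item)
        else:
            groups.append(cur)
            cur = [item]
    groups.append(cur)
    out = [0.0] * len(xs)
    for g in groups:
        mean = sum(v for _, v in g) / len(g)
        for i, _ in g:
            out[i] = mean                 # enforce the constraint
    return out
```

    The paper's 2-D formulation detects alignment, size, and distance constraints between bounding boxes and enforces them jointly, but the detect-then-solve-a-QP structure is the same.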

  12. Regularization Analysis of SAR Superresolution

    SciTech Connect

    DELAURENTIS,JOHN M.; DICKEY,FRED M.

    2002-04-01

    Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. In a previous report the application of the concept to synthetic aperture radar was investigated as an operator inversion problem. Generally, the operator inversion problem is ill posed. This work treats the problem from the standpoint of regularization. Both the operator inversion approach and the regularization approach show that the ability to superresolve SAR imagery is severely limited by system noise.

  13. Features of the regular F2-layer

    NASA Astrophysics Data System (ADS)

    Besprozvannaia, A. S.

    1987-10-01

    Results of the empirical modeling of cyclic and seasonal variations of the daytime regular F2-layer are presented. It is shown that the formation of the seasonal anomaly in years of high solar activity is determined mainly by a summer anomaly. This summer anomaly is connected with an increase in the content of molecular nitrogen in the polar ionosphere during summer months due to additional heating and turbulent mixing in connection with intense dissipation of the three-dimensional current system under high-conductivity conditions. In solar-minimum years the seasonal anomaly is determined mainly by seasonal variations of the composition of the neutral atmosphere in the passage from winter to summer.

  14. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  15. Academic Improvement through Regular Assessment

    ERIC Educational Resources Information Center

    Wolf, Patrick J.

    2007-01-01

    Media reports are rife with claims that students in the United States are overtested and that they and their education are suffering as result. Here I argue the opposite--that students would benefit in numerous ways from more frequent assessment, especially of diagnostic testing. The regular assessment of students serves critical educational and…

  16. Temporal regularity in speech perception: Is regularity beneficial or deleterious?

    PubMed Central

    Geiser, Eveline; Shattuck-Hufnagel, Stefanie

    2012-01-01

    Speech rhythm has been proposed to be of crucial importance for correct speech perception and language learning. This study investigated the influence of speech rhythm in second language processing. German pseudo-sentences were presented to participants in two conditions: a ‘naturally regular speech rhythm’ and an ‘emphasized regular rhythm’. Nine expert English speakers with 3.5±1.6 years of German training repeated each sentence after hearing it once over headphones. Responses were transcribed using the International Phonetic Alphabet and analyzed for the number of correct, false and missing consonants as well as for consonant additions. The overall number of correct reproductions of consonants did not differ between the two experimental conditions. However, speech rhythmicization significantly affected the serial position curve of correctly reproduced syllables. The results of this pilot study are consistent with the view that speech rhythm is important for speech perception. PMID:22701753

  17. Distributional Stress Regularity: A Corpus Study

    ERIC Educational Resources Information Center

    Temperley, David

    2009-01-01

    The regularity of stress patterns in a language depends on "distributional stress regularity", which arises from the pattern of stressed and unstressed syllables, and "durational stress regularity", which arises from the timing of syllables. Here we focus on distributional regularity, which depends on three factors. "Lexical stress patterning"…

  18. Grouping pursuit through a regularization solution surface *

    PubMed Central

    Shen, Xiaotong; Huang, Hsin-Cheng

    2010-01-01

    Summary Extracting grouping structure or identifying homogeneous subgroups of predictors in regression is crucial for high-dimensional data analysis. One low-dimensional structure in particular, grouping, when captured in a regression model, enhances predictive performance and facilitates a model's interpretability. Grouping pursuit extracts homogeneous subgroups of predictors most responsible for outcomes of a response. This is the case in gene network analysis, where grouping reveals gene functionalities with regard to the progression of a disease. To address challenges in grouping pursuit, we introduce a novel homotopy method for computing an entire solution surface through regularization involving a piecewise linear penalty. This nonconvex and overcomplete penalty permits adaptive grouping and nearly unbiased estimation, which is treated with a novel concept of grouped subdifferentials and difference convex programming for efficient computation. Finally, the proposed method not only achieves high performance as suggested by numerical analysis, but also has the desired optimality with regard to grouping pursuit and prediction as shown by our theoretical results. PMID:20689721

  19. Adaptive regularization of earthquake slip distribution inversion

    NASA Astrophysics Data System (ADS)

    Wang, Chisheng; Ding, Xiaoli; Li, Qingquan; Shan, Xinjian; Zhu, Jiasong; Guo, Bo; Liu, Peng

    2016-04-01

    Regularization is a routine approach used in earthquake slip distribution inversion to avoid numerically abnormal solutions. To date, most slip inversion studies have imposed uniform regularization on all the fault patches. However, adaptive regularization, where each retrieved parameter is regularized differently, has exhibited better performance in other research fields such as image restoration. In this paper, we investigate adaptive regularization for earthquake slip distribution inversion. It is found that adaptive regularization can achieve a significantly smaller mean square error (MSE) than uniform regularization, if it is set properly. We propose an adaptive regularization method based on weighted total least squares (WTLS). This approach assumes that errors exist in both the regularization matrix and the observation, and an iterative algorithm is used to compute the solution. A weight coefficient is used to balance the regularization matrix residual and the observation residual. An experiment using four slip patterns was carried out to validate the proposed method. The results show that the proposed regularization method can achieve a smaller MSE than uniform regularization and resolution-based adaptive regularization, and the improvement in MSE is more significant for slip patterns with low-resolution slip patches. We then apply the proposed regularization method to study the slip distribution of the 2011 Mw 9.0 Tohoku earthquake. The retrieved slip distribution is less smooth and more detailed than the one retrieved with the uniform regularization method, and is closer to the existing slip model from joint inversion of geodetic and seismic data.
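
    The uniform-regularization baseline that the adaptive scheme improves on is ordinary Tikhonov inversion. A minimal sketch (standard textbook form, not the authors' WTLS iteration; the function name is ours):

```python
import numpy as np

def tikhonov_invert(G, d, R, alpha):
    """Regularized least squares for a linear slip inversion G m = d:
        min ||G m - d||^2 + alpha^2 ||R m||^2
    solved via the normal equations. With a single alpha for every fault
    patch this is uniform regularization."""
    A = G.T @ G + alpha**2 * (R.T @ R)
    return np.linalg.solve(A, G.T @ d)
```

    An adaptive scheme replaces the single alpha·R with per-patch weights; the paper's WTLS formulation additionally treats the regularization matrix itself as uncertain and estimates the balance iteratively.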

  20. On the four-dimensional formulation of dimensionally regulated amplitudes

    NASA Astrophysics Data System (ADS)

    Fazio, A. R.; Mastrolia, P.; Mirabella, E.; Torres Bobadilla, W. J.

    2014-12-01

    Elaborating on the four-dimensional helicity scheme, we propose a pure four-dimensional formulation (FDF) of the d-dimensional regularization of one-loop scattering amplitudes. In our formulation, particles propagating inside the loop are represented by massive internal states regulating the divergences. The latter obey Feynman rules containing multiplicative selection rules which automatically account for the effects of the extra-dimensional regulating terms of the amplitude. We present explicit representations of the polarization and helicity states of the four-dimensional particles propagating in the loop. They allow for a complete, four-dimensional, unitarity-based construction of d-dimensional amplitudes. Generalized unitarity within the FDF does not require any higher-dimensional extension of the Clifford and the spinor algebra. Finally we show how the FDF allows for the recursive construction of d-dimensional one-loop integrands, generalizing the four-dimensional open-loop approach.

  1. Regularized image system for Stokes flow outside a solid sphere

    NASA Astrophysics Data System (ADS)

    Wróbel, Jacek K.; Cortez, Ricardo; Varela, Douglas; Fauci, Lisa

    2016-07-01

    The image system for a three-dimensional flow generated by regularized forces outside a solid sphere is formulated and implemented as an extension of the method of regularized Stokeslets. The method is based on replacing a point force given by a delta distribution with a smooth localized function and deriving the exact velocity field produced by the forcing. In order to satisfy zero-flow boundary conditions at a solid sphere, the image system for singular Stokeslets is generalized to give exact cancellation of the regularized flow at the surface of the sphere. The regularized image system contains the same elements as the singular counterpart but with coefficients that depend on a regularization parameter. As this parameter vanishes, the expressions reduce to the image system of the singular Stokeslet. The expression relating force and velocity can be inverted to compute the forces that generate a given velocity boundary condition elsewhere in the flow. We present several examples within the context of biological flows at the microscale in order to validate and highlight the usefulness of the image system in computations.
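
    The building block of the method is the free-space velocity field of a single regularized Stokeslet. A minimal sketch for one commonly used Cortez-type blob (the paper's image-system corrections at the sphere are omitted; the function name is ours):

```python
import numpy as np

def regularized_stokeslet_velocity(x, x0, f, eps, mu=1.0):
    """Free-space velocity at x due to a regularized point force f at x0,
    for a Cortez-type blob of width eps:
        u = [ f (r^2 + 2 eps^2) + (f . r) r ] / (8 pi mu (r^2 + eps^2)^(3/2))
    As eps -> 0 this reduces term by term to the singular Stokeslet
        u = (1 / 8 pi mu) (f / r + (f . r) r / r^3)."""
    f = np.asarray(f, dtype=float)
    r = np.asarray(x, dtype=float) - np.asarray(x0, dtype=float)
    r2 = float(r @ r)
    denom = 8.0 * np.pi * mu * (r2 + eps**2) ** 1.5
    return (f * (r2 + 2.0 * eps**2) + r * float(f @ r)) / denom
```

    The image system of the paper adds further regularized elements inside the sphere, with eps-dependent coefficients chosen so the total flow cancels exactly on the sphere's surface.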

  2. Knowledge and regularity in planning

    NASA Technical Reports Server (NTRS)

    Allen, John A.; Langley, Pat; Matwin, Stan

    1992-01-01

    The field of planning has focused on several methods of using domain-specific knowledge. The three most common methods, use of search control, use of macro-operators, and analogy, are part of a continuum of techniques differing in the amount of reused plan information. This paper describes TALUS, a planner that exploits this continuum and is used to compare the relative utility of these methods. We present results showing how search control, macro-operators, and analogy are affected by domain regularity and the amount of stored knowledge.

  3. Creating Two-Dimensional Nets of Three-Dimensional Shapes Using "Geometer's Sketchpad"

    ERIC Educational Resources Information Center

    Maida, Paula

    2005-01-01

    This article is about a computer lab project in which prospective teachers used Geometer's Sketchpad software to create two-dimensional nets for three-dimensional shapes. Since this software package does not contain ready-made tools for creating non-regular or regular polygons, the students used prior knowledge and geometric facts to create their…

  4. Tessellating the Sphere with Regular Polygons

    ERIC Educational Resources Information Center

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
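
    This short list can be recovered from the angle condition on Schläfli symbols: a regular tessellation {p, q} of the sphere, with q regular p-gons meeting at each vertex, requires 1/p + 1/q > 1/2 (restricting to p, q ≥ 3 excludes the degenerate hosohedra). A small enumeration sketch (the function name is ours):

```python
from fractions import Fraction

def spherical_schlafli(limit=12):
    """Schlafli symbols {p, q}, p, q >= 3, satisfying 1/p + 1/q > 1/2,
    the condition for q regular p-gons around a vertex to close up on
    the sphere rather than tile the plane (=1/2) or the hyperbolic
    plane (<1/2)."""
    return sorted((p, q)
                  for p in range(3, limit + 1)
                  for q in range(3, limit + 1)
                  if Fraction(1, p) + Fraction(1, q) > Fraction(1, 2))
```

    The surviving symbols {3,3}, {3,4}, {3,5}, {4,3}, {5,3} use exactly the spherical triangles, squares and pentagons of the abstract, and correspond to the five Platonic solids.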

  5. Some Cosine Relations and the Regular Heptagon

    ERIC Educational Resources Information Center

    Osler, Thomas J.; Heng, Phongthong

    2007-01-01

    The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)
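
    The obstruction the Greeks ran into can be phrased as a cosine relation: cos(2πk/7) for k = 1, 2, 3 are the three roots of the cubic 8x³ + 4x² - 4x - 1, and since this cubic is irreducible over the rationals, cos(2π/7) (and hence the regular heptagon) is not constructible by straightedge and compass. A numerical check of the relation:

```python
import math

def heptagon_cubic(x):
    """Minimal polynomial of cos(2*pi/7): 8x^3 + 4x^2 - 4x - 1."""
    return 8 * x**3 + 4 * x**2 - 4 * x - 1

# All three heptagon cosines are roots of the cubic.
roots = [math.cos(2 * math.pi * k / 7) for k in (1, 2, 3)]
```

    Constructible lengths have minimal polynomials of degree a power of two, which is why the degree-3 relation rules the heptagon out.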

  6. Regular Pentagons and the Fibonacci Sequence.

    ERIC Educational Resources Information Center

    French, Doug

    1989-01-01

    Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)
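
    The two facts meet in the golden ratio φ = (1 + √5)/2: the diagonal-to-side ratio of a regular pentagon is 2 cos(π/5) = φ, and ratios of consecutive Fibonacci (or Lucas) numbers converge to the same value. A short numerical illustration (the function name is ours):

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio

def fib_ratio(n):
    """Ratio of consecutive Fibonacci numbers after n iterations."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

# Pentagon diagonal-to-side ratio: 2*cos(pi/5) equals phi.
pentagon_ratio = 2 * math.cos(math.pi / 5)
```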

  7. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  8. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  9. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  10. Natural frequency of regular basins

    NASA Astrophysics Data System (ADS)

    Tjandra, Sugih S.; Pudjaprasetya, S. R.

    2014-03-01

Similar to the vibration of a guitar string or an elastic membrane, water waves in an enclosed basin undergo standing oscillatory waves, also known as seiches. The resonant (eigen) periods of seiches are determined by the water depth and the geometry of the basin; for regular basins, explicit formulas are available. Resonance occurs when the dominant frequency of the external forcing matches an eigenfrequency of the basin. In this paper, we implement a conservative finite volume scheme for the 2D shallow water equations to simulate resonance in closed basins. Our aim is to use this scheme, together with the energy spectra of the recorded signal, to extract the resonant periods of arbitrary basins; here we first test the procedure on a square closed basin. The numerical resonant periods that we obtain are comparable with those from the analytical formulas.
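For a rectangular closed basin of uniform depth, the explicit resonant periods the abstract refers to are given by Merian's formula; a minimal sketch (the basin dimensions below are illustrative assumptions, not the paper's test case):

```python
import math

def seiche_period(L, W, h, m, n, g=9.81):
    """Resonant period (s) of mode (m, n) of a closed rectangular basin
    of length L, width W and uniform depth h (Merian's formula)."""
    if m == 0 and n == 0:
        raise ValueError("mode (0, 0) is not an oscillation")
    c = math.sqrt(g * h)                       # shallow-water wave speed
    return 2.0 / (c * math.sqrt((m / L) ** 2 + (n / W) ** 2))

# Fundamental mode of a 100 m x 100 m basin of 2 m depth:
T_fund = seiche_period(100.0, 100.0, 2.0, 1, 0)
```

For the fundamental mode of a square basin of side L this reduces to T = 2L/√(gh).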

  11. Pairing effect and misleading regularity

    NASA Astrophysics Data System (ADS)

    Al-Sayed, A.

    2015-11-01

We study the nearest neighbor spacing distribution of energy levels of even-even nuclei classified according to their reduced electric quadrupole transition probability B(E2)↑ using the available experimental data. We compare the Brody and Abul-Magd distributions, which extract the degree of chaoticity within nuclear dynamics. The results show that the Abul-Magd parameter f can represent the chaotic behavior in a more acceptable way than Brody's, especially if a statistically significant study is desired. A smooth transition from chaos to order is observed as B(E2)↑ increases. An apparent regularity is located in the second interval, namely at 0.05 ≤ B(E2) < 0.1 in e²b² units and at 10 ≤ B(E2) < 15 in Weisskopf units. Finally, the chaotic behavior parameterized in terms of B(E2)↑ does not depend on the unit used.
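The Brody distribution the authors compare against interpolates between the Poisson (q = 0, regular) and Wigner (q = 1, chaotic) spacing laws. A minimal sketch of its density, with the standard normalization b = Γ((q+2)/(q+1))^(q+1) that fixes the mean spacing to one (the Abul-Magd distribution is not reproduced here):

```python
import math

def brody_pdf(s, q):
    """Brody nearest-neighbor spacing density: q = 0 gives Poisson
    (regular spectra), q = 1 the Wigner surmise (chaotic spectra)."""
    b = math.gamma((q + 2.0) / (q + 1.0)) ** (q + 1.0)
    return (q + 1.0) * b * s ** q * math.exp(-b * s ** (q + 1.0))

def moments(q, s_max=20.0, n=20000):
    """Midpoint-rule check of normalization and mean spacing."""
    ds = s_max / n
    norm = mean = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        p = brody_pdf(s, q)
        norm += p * ds
        mean += s * p * ds
    return norm, mean
```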

  12. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  13. Wave dynamics of regular and chaotic rays

    SciTech Connect

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space.

  14. Energy Scaling Law for the Regular Cone

    NASA Astrophysics Data System (ADS)

    Olbermann, Heiner

    2016-04-01

    We consider a thin elastic sheet in the shape of a disk whose reference metric is that of a singular cone. That is, the reference metric is flat away from the center and has a defect there. We define a geometrically fully nonlinear free elastic energy and investigate the scaling behavior of this energy as the thickness h tends to 0. We work with two simplifying assumptions: Firstly, we think of the deformed sheet as an immersed 2-dimensional Riemannian manifold in Euclidean 3-space and assume that the exponential map at the origin (the center of the sheet) supplies a coordinate chart for the whole manifold. Secondly, the energy functional penalizes the difference between the induced metric and the reference metric in L^∞ (instead of, as is usual, in L^2). Under these assumptions, we show that the elastic energy per unit thickness of the regular cone in the leading order of h is given by C^*h^2|log h|, where the value of C^* is given explicitly.

  15. Existence, uniqueness, and equivalence theorems for magnetic monopoles in general (4p-1)-dimensional Yang-Mills theory

    SciTech Connect

    Gao Zhifeng; Zhang Jing

    2009-04-15

In this paper, we use the method of calculus of variations to establish the existence of energy-minimizing radially symmetric magnetic monopole solutions in the general (4p-1)-dimensional Yang-Mills gauge field theory developed recently by Radu and Tchrakian. We also show that these solutions are either self-dual or anti-self-dual and, hence, unique. Our study extends the existence work of Belavin, Polyakov, Schwartz, and Tyupkin and the equivalence and uniqueness work of Maison in three dimensions and the work of Yang in seven dimensions to the situation of arbitrary (4p-1) dimensions.

  16. Regularity vs genericity in the perception of collinearity.

    PubMed

    Feldman, J

    1996-01-01

The perception of collinearity is investigated, with the focus on the minimal case of three dots. As suggested previously, from the standpoint of probabilistic inference, the observer must classify each dot triplet as having arisen either from a one-dimensional curvilinear process or from a two-dimensional patch. The normative distributions of triplets arising from these two classes are unavailable to the observer, and are in fact somewhat counterintuitive. Hence in order to classify triplets, the observer invents distributions for each of the two opposed types, 'regular' (collinear) triplets and 'generic' (ie not regular) triplets. The collinear prototype is centered at 0 degree (ie perfectly straight), whereas the generic prototype, contrary to the normative statistics, is centered at 120 degrees away from straight, apparently because this is the point most distant in triplet space from straight and thus creates the maximum possible contrast between the two prototypes. By default, these two processes are assumed to be equiprobable in the environment. An experiment designed to investigate how subjects' judgments are affected by conspicuous environmental deviations from this assumption is reported. The results suggest that observers react by elevating or depressing the expected probability of the generic prototype relative to the regular one, leaving the prototype structure otherwise intact. PMID:8804096

  17. Simulation Of Attenuation Regularity Of Detonation Wave In Pmma

    NASA Astrophysics Data System (ADS)

    Lan, Wei; Xiaomian, Hu

    2012-03-01

Polymethyl methacrylate (PMMA) is often used as clapboard or protective medium in the parameter measurement of detonation wave propagation. Theoretical and experimental research shows that the pressure of a shock wave in condensed material attenuates exponentially with the distance of propagation. Simulation of detonation-produced shock wave propagation in PMMA was conducted using a two-dimensional Lagrangian computational fluid dynamics program, and results were compared with the experimental data. Different charge diameters and different angles between the direction of detonation wave propagation and the normal direction of the confined boundary were considered during the calculation. Results show that the detonation-produced shock wave propagation in PMMA accords with the exponential regularity of shock wave attenuation in condensed material, and several factors, such as charge diameter and interface angle, affect the attenuation coefficient.

  18. Simulation of attenuation regularity of detonation wave in PMMA

    NASA Astrophysics Data System (ADS)

    Lan, Wei; Xiaomian, Hu

    2011-06-01

Polymethyl methacrylate (PMMA) is often used as clapboard or protective medium in the parameter measurement of detonation wave propagation, because its shock impedance is similar to that of the explosive. Theoretical and experimental research shows that the pressure of a shock wave in condensed material attenuates exponentially with the distance of propagation. Simulation of detonation wave propagation in PMMA is conducted using a two-dimensional Lagrangian computational fluid dynamics program, and results are compared with the experimental data. Different charge diameters and different angles between the direction of detonation wave propagation and the normal direction of the confined boundary are considered during the calculation. Results show that the detonation wave propagation in PMMA accords with the exponential regularity of shock wave attenuation in condensed material, and several factors, such as charge diameter and interface angle, affect the attenuation coefficient.

  19. Manifestly scale-invariant regularization and quantum effective operators

    NASA Astrophysics Data System (ADS)

    Ghilencea, D. M.

    2016-05-01

Scale-invariant theories are often used to address the hierarchy problem. However the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function μ(σ) which, after spontaneous breaking of the scale symmetry, generates the usual dimensional regularization subtraction scale μ(⟨σ⟩). One consequence is that "evanescent" interactions generated by scale invariance of the action in d = 4 − 2ε (but vanishing in d = 4) give rise to new, finite quantum corrections. We find a (finite) correction ΔU(ϕ,σ) to the one-loop scalar potential for ϕ and σ, beyond the Coleman-Weinberg term. ΔU is due to an evanescent correction (∝ ε) to the field-dependent masses (of the states in the loop) which multiplies the pole (∝ 1/ε) of the momentum integral to give a finite quantum result. ΔU contains a nonpolynomial operator ~ϕ^6/σ^2 of known coefficient and is independent of the subtraction dimensionless parameter. A more general μ(ϕ,σ) is ruled out since, in their classical decoupling limit, the visible sector (of the Higgs ϕ) and hidden sector (dilaton σ) still interact at the quantum level; thus, the subtraction function must depend on the dilaton only, μ ~ σ. The method is useful in models where preserving scale symmetry at quantum level is important.

  20. Perfect state transfer over distance-regular spin networks

    SciTech Connect

    Jafarizadeh, M. A.; Sufiani, R.

    2008-02-15

Christandl et al. have noted that the d-dimensional hypercube can be projected to a linear chain with d+1 sites so that, by considering fixed but different couplings between the qubits assigned to the sites, perfect state transfer (PST) can be achieved over arbitrarily long distances in the chain [Phys. Rev. Lett. 92, 187902 (2004); Phys. Rev. A 71, 032312 (2005)]. In this work we consider distance-regular graphs as spin networks and note that any such network (not just the hypercube) can be projected to a linear chain and so can allow PST over long distances. We consider some particular spin Hamiltonians which are extended versions of those of Christandl et al. Then, by using techniques such as stratification of distance-regular graphs and spectral analysis methods, we give a procedure for finding a set of coupling constants in the Hamiltonians so that a particular state initially encoded on one site will evolve freely to the opposite site without any dynamical control, i.e., we show how to derive the parameters of the system so that PST can be achieved. It is seen that PST is allowed only in distance-regular spin networks for which, starting from an arbitrary vertex as reference vertex (prepared in the initial state which we wish to transfer), the last stratum of the network with respect to the reference vertex contains only one vertex; i.e., the stratification of these networks plays an important role in determining in which networks, and between which of their vertices, PST is possible. As examples, the cycle network with an even number of vertices and the d-dimensional hypercube are considered in detail, and the method is applied to some important distance-regular networks.
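The projected-chain construction of Christandl et al. that the abstract builds on can be verified directly in the single-excitation sector: with couplings J_i = √(i(N−i))/2 the chain Hamiltonian coincides with the spin-(N−1)/2 operator S_x, so evolution for time t = π maps site 1 onto site N with unit fidelity. A sketch of this known chain result (not the authors' distance-regular construction):

```python
import numpy as np

N = 7                                            # number of chain sites
J = [0.5 * np.sqrt(i * (N - i)) for i in range(1, N)]
H = np.diag(J, 1) + np.diag(J, -1)               # single-excitation Hamiltonian

# U = exp(-i * pi * H) via eigendecomposition of the real symmetric H.
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * np.pi * w)) @ V.T

fidelity = abs(U[-1, 0])     # transfer amplitude from site 1 to site N
```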

  1. Digital image correlation involves an inverse problem: A regularization scheme based on subset size constraint

    NASA Astrophysics Data System (ADS)

    Zhan, Qin; Yuan, Yuan; Fan, Xiangtao; Huang, Jianyong; Xiong, Chunyang; Yuan, Fan

    2016-06-01

Digital image correlation (DIC) essentially involves a class of inverse problem. Here, a regularization scheme is developed for the subset-based DIC technique to effectively inhibit the potential ill-posedness that is likely to arise in actual deformation calculations, and hence enhance the numerical stability, accuracy and precision of correlation measurement. With the aid of a parameterized two-dimensional Butterworth window, a regularized subpixel registration strategy is established, in which the amount of speckle information introduced into correlation calculations may be weighted through an equivalent subset size constraint. The optimal regularization parameter associated with each individual sampling point is determined in a self-adaptive way by numerically investigating the curve of the 2-norm condition number of the coefficient matrix versus the corresponding equivalent subset size, based on which the regularized solution can eventually be obtained. Numerical results deriving from both synthetic speckle images and actual experimental images demonstrate the feasibility and effectiveness of the newly proposed regularized DIC algorithms.
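A two-dimensional Butterworth window has the standard radial form W(r) = 1/(1 + (r/r_c)^(2n)); the paper's exact parameterization and its coupling to the subset size constraint are not reproduced here, so the cutoff ratio and order below are illustrative assumptions:

```python
import math

def butterworth_window(size, cutoff_ratio=0.5, order=4):
    """2D radial Butterworth window W = 1 / (1 + (r/rc)^(2n)),
    used here as an illustrative subset weighting."""
    c = (size - 1) / 2.0          # window center
    rc = cutoff_ratio * c         # cutoff radius
    win = []
    for yy in range(size):
        row = []
        for xx in range(size):
            r = math.hypot(xx - c, yy - c)
            row.append(1.0 / (1.0 + (r / rc) ** (2 * order)))
        win.append(row)
    return win

w = butterworth_window(33)
```

The window equals 1 at the subset center and 0.5 at the cutoff radius, rolling off monotonically with distance from the center.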

  2. Efficient determination of multiple regularization parameters in a generalized L-curve framework

    NASA Astrophysics Data System (ADS)

    Belge, Murat; Kilmer, Misha E.; Miller, Eric L.

    2002-08-01

The selection of multiple regularization parameters is considered in a generalized L-curve framework. Multi-dimensional extensions of the L-curve for selecting multiple regularization parameters are introduced, and a minimum distance function (MDF) is developed for approximating the regularization parameters corresponding to the generalized corner of the L-hypersurface. For the single-parameter (i.e. L-curve) case, it is shown through a model that the regularization parameters minimizing the MDF essentially maximize the curvature of the L-curve. Furthermore, for both the single- and multiple-parameter cases the MDF approach leads to a simple fixed-point iterative algorithm for computing regularization parameters. Examples indicate that the algorithm converges rapidly, thereby making the problem of computing parameters according to the generalized corner of the L-hypersurface computationally tractable.
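For the single-parameter case, the corner-selection idea can be sketched by picking the L-curve point closest to a reference point built from the componentwise minima of log-residual and log-solution-norm; this is a simplified stand-in for the paper's MDF, and the test matrix and lambda grid are assumptions:

```python
import numpy as np

n = 8
# Hilbert matrix: a classic ill-conditioned test operator.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-4 * np.sin(np.arange(n))      # mildly perturbed data

lams = np.logspace(-12, 0, 60)
pts = []
for lam in lams:
    # Tikhonov solution for this lambda
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    pts.append((np.log(np.linalg.norm(A @ x - b)),   # log residual norm
                np.log(np.linalg.norm(x))))          # log solution norm
pts = np.array(pts)

# "Corner" = L-curve point closest to the componentwise-minimum point.
origin = pts.min(axis=0)
corner = int(np.argmin(np.sum((pts - origin) ** 2, axis=1)))
lam_star = lams[corner]
```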

  3. Wavelet Regularization Per Nullspace Shuttle

    NASA Astrophysics Data System (ADS)

    Charléty, J.; Nolet, G.; Sigloch, K.; Voronin, S.; Loris, I.; Simons, F. J.; Daubechies, I.; Judd, S.

    2010-12-01

Wavelet decomposition of models in an over-parameterized Earth and L1-norm minimization in wavelet space is a promising strategy to deal with the very heterogeneous data coverage in the Earth without sacrificing detail in the solution where this is resolved (see Loris et al., abstract this session). However, L1-norm minimizations are nonlinear, and pose problems of convergence speed when applied to large data sets. In an effort to speed up computations we investigate the application of the nullspace shuttle (Deal and Nolet, GJI 1996). The nullspace shuttle is a filter that adds components from the nullspace to the minimum norm solution so as to have the model satisfy additional conditions not imposed by the data. In our case, the nullspace shuttle projects the model on a truncated basis of wavelets. The convergence of this strategy is unproven, in contrast to algorithms using Landweber iteration or one of its variants, but initial computations using a very large database give reason for optimism. We invert 430,554 P delay times measured by cross-correlation in different frequency windows. The data are dominated by observations with USArray, leading to a major discrepancy in the resolution beneath North America and the rest of the world. This is a subset of the data set inverted by Sigloch et al (Nature Geosci, 2008), excluding only a small number of ISC delays at short distance and all amplitude data. The model is a cubed Earth model with 3,637,248 voxels spanning mantle and crust, with a resolution everywhere better than 70 km, to which 1912 event corrections are added. In each iteration we determine the optimal solution by a least squares inversion with minimal damping, after which we regularize the model in wavelet space. We then compute the residual data vector (after an intermediate scaling step), and solve for a model correction until a satisfactory chi-square fit for the truncated model is obtained.
We present our final results on convergence as well as a

  4. Parameter fitting in three-flavor Nambu-Jona-Lasinio model with various regularizations

    NASA Astrophysics Data System (ADS)

    Kohyama, H.; Kimura, D.; Inagaki, T.

    2016-05-01

We study the three-flavor Nambu-Jona-Lasinio model with various regularization procedures. We perform parameter fitting in each regularization and apply the obtained parameter sets to evaluate various physical quantities: several light meson masses, decay constants and the topological susceptibility. The model parameters are also fitted at cutoff scales very high compared to the hadronic scale, in order to study the asymptotic behavior of the model. It is found that all the regularization methods except for the dimensional one lead to reliable physical predictions for the kaon decay constant, sigma meson mass and topological susceptibility without restricting the ultraviolet cutoff below the hadronic scale.

  5. Higher spin black holes in three dimensions: Remarks on asymptotics and regularity

    NASA Astrophysics Data System (ADS)

    Bañados, Máximo; Canto, Rodrigo; Theisen, Stefan

    2016-07-01

In the context of (2+1)-dimensional SL(N,R) × SL(N,R) Chern-Simons theory we explore issues related to regularity and asymptotics on the solid torus, for stationary and circularly symmetric solutions. We display and solve all necessary conditions to ensure a regular metric and metric-like higher spin fields. We prove that holonomy conditions are necessary but not sufficient conditions to ensure regularity, and that Hawking conditions do not necessarily follow from them. Finally we give a general proof that once the chemical potentials are turned on, as demanded by regularity, the asymptotics cannot be that of Brown-Henneaux.

  6. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... advances without approval of the NCUA Board for a period of six months after becoming a member. This subsection shall not apply to any credit union which becomes a Regular member of the Facility within six... member of the Facility at any time within six months prior to becoming a Regular member of the Facility....

  7. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-04-01

Possible nonperturbative continuum regularization schemes for quantum field theory are discussed which are based upon the Langevin equation of Parisi and Wu. Breit, Gupta and Zaks made the first proposal for a new gauge-invariant nonperturbative regularization. Their scheme is based on smearing in the ''fifth-time'' of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, two difficulties arise which, in general, ruin the scheme. One problem is that the superficial quadratic divergences force a bottomless action for the noise. Another difficulty is that stochastic regularization by fifth-time smearing is incompatible with Zwanziger's gauge fixing, which is the only known nonperturbative covariant gauge fixing for nonabelian gauge theories. Finally, a successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest. Hopefully, the scheme will lend itself to nonperturbative analysis. 44 refs., 16 figs.

  8. Regular Decompositions for H(div) Spaces

    SciTech Connect

    Kolev, Tzanio; Vassilevski, Panayot

    2012-01-01

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  9. On regularizations of the Dirac delta distribution

    NASA Astrophysics Data System (ADS)

    Hosseini, Bamdad; Nigam, Nilima; Stockie, John M.

    2016-01-01

    In this article we consider regularizations of the Dirac delta distribution with applications to prototypical elliptic and hyperbolic partial differential equations (PDEs). We study the convergence of a sequence of distributions SH to a singular term S as a parameter H (associated with the support size of SH) shrinks to zero. We characterize this convergence in both the weak-* topology of distributions and a weighted Sobolev norm. These notions motivate a framework for constructing regularizations of the delta distribution that includes a large class of existing methods in the literature. This framework allows different regularizations to be compared. The convergence of solutions of PDEs with these regularized source terms is then studied in various topologies such as pointwise convergence on a deleted neighborhood and weighted Sobolev norms. We also examine the lack of symmetry in tensor product regularizations and effects of dissipative error in hyperbolic problems.
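A concrete instance of such a regularization is the cosine kernel of half-width H, which integrates to one and converges weakly to the delta as H shrinks; a minimal sketch (this particular kernel is one common choice, not necessarily one analyzed in the article):

```python
import math

def delta_cos(x, H):
    """Cosine-regularized delta: support [-H, H], unit integral."""
    if abs(x) >= H:
        return 0.0
    return (1.0 + math.cos(math.pi * x / H)) / (2.0 * H)

def smeared(f, H, n=4000):
    """Midpoint approximation of the integral of f(x) * delta_H(x)."""
    dx = 2.0 * H / n
    return sum(f(-H + (i + 0.5) * dx) * delta_cos(-H + (i + 0.5) * dx, H) * dx
               for i in range(n))

f = math.exp                     # smooth test function with f(0) = 1
errors = [abs(smeared(f, H) - 1.0) for H in (0.4, 0.2, 0.1)]
```

Because the kernel is even, its first moment vanishes and the smearing error decays like O(H²), which the shrinking errors illustrate.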

  10. Quantitative regularities in floodplain formation

    NASA Astrophysics Data System (ADS)

    Nevidimova, O.

    2009-04-01

Modern methods of the theory of complex systems allow us to build mathematical models of complex systems in which self-organizing processes are largely determined by nonlinear effects and feedback. However, some factors exert a significant influence on the dynamics of geomorphosystems yet can hardly be expressed adequately in the language of mathematical models. Conceptual modeling allows us to overcome this difficulty. It is based on the methods of synergetics, which, together with the theory of dynamical systems and classical geomorphology, make it possible to display the dynamics of geomorphological systems. The concept most adequate for mathematical modeling of complex systems is that of model dynamics based on equilibrium. This concept rests on dynamic equilibrium, the tendency toward which is observed in the evolution of all geomorphosystems. As an objective law, it is revealed in the evolution of fluvial relief in general, and in river channel processes in particular, demonstrating the ability of these systems to self-organize. The channel process is expressed in the formation of river reaches, riffles, meanders and floodplain. As the floodplain is a surface periodically flooded during high waters, it naturally connects the river channel with the slopes, being one of the boundary expressions of the water stream's activity. Floodplain dynamics is inseparable from channel dynamics. The floodplain is formed by simultaneous horizontal and vertical displacement of the river channel, that is, Y = Y(x, y), where x, y are the horizontal and vertical coordinates and Y is the floodplain height. When dy/dt = 0 (for a river channel that is not incising), the river, being displaced in the horizontal plane, leaves behind a low surface whose total duration of flooding during high waters decreases from its maximum at the initial moment t0 to zero at the moment tn. The total amount of material accumulated on the floodplain surface changes in a similar manner.

  12. Manifold regularized non-negative matrix factorization with label information

    NASA Astrophysics Data System (ADS)

    Li, Huirong; Zhang, Jiangshe; Wang, Changpeng; Liu, Junmin

    2016-03-01

Non-negative matrix factorization (NMF), as a popular technique for finding parts-based, linear representations of non-negative data, has been successfully applied in a wide range of applications, such as feature learning, dictionary learning, and dimensionality reduction. However, the local manifold structure of the data and the discriminative information of the available labels have not been jointly taken into account in NMF. We propose a new semisupervised matrix decomposition method, called manifold regularized non-negative matrix factorization (MRNMF) with label information, which incorporates manifold regularization and label information into NMF to improve the performance of NMF in clustering tasks. We encode the local geometrical structure of the data space by constructing a nearest neighbor graph and enhance the discriminative ability of different classes by effectively using the label information. Experimental comparisons with the state-of-the-art methods on the COIL20, PIE, Extended Yale B, and MNIST databases demonstrate the effectiveness of MRNMF.
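The manifold term can be sketched with the multiplicative updates of graph-regularized NMF, minimizing ‖X − WH‖² + λ·Tr(H L Hᵀ) with L the graph Laplacian; the label-information part of MRNMF is omitted here, and the data, graph, and λ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, lam = 20, 12, 3, 0.1

X = rng.random((m, n)) + 0.1                    # non-negative data
A = (rng.random((n, n)) < 0.3).astype(float)    # random graph on samples
A = np.triu(A, 1)
A = A + A.T                                     # symmetric, no self-loops
D = np.diag(A.sum(axis=1))
L = D - A                                       # graph Laplacian

W = rng.random((m, k)) + 0.1                    # non-negative factors
H = rng.random((k, n)) + 0.1
eps = 1e-12                                     # guard against division by zero

def objective():
    return np.linalg.norm(X - W @ H) ** 2 + lam * np.trace(H @ L @ H.T)

obj_start = objective()
for _ in range(200):
    # Multiplicative updates keep W, H non-negative and decrease the objective.
    W *= (X @ H.T) / (W @ H @ H.T + eps)
    H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
obj_end = objective()
```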

  13. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    SciTech Connect

    Gao, J. M.; Liu, Y.; Li, W.; Lu, J.; Dong, Y. B.; Xia, Z. W.; Yi, P.; Yang, Q. W.

    2013-09-15

An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is then obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter on the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.

  14. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A.

    PubMed

    Gao, J M; Liu, Y; Li, W; Lu, J; Dong, Y B; Xia, Z W; Yi, P; Yang, Q W

    2013-09-01

An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is then obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter on the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges. PMID:24089825

  15. Functional MRI Using Regularized Parallel Imaging Acquisition

    PubMed Central

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M.; Belliveau, John W.; Wald, Lawrence L.; Kwong, Kenneth K.

    2013-01-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced number of data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter. Since regularization requires a static prior, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from a combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and with different array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions. PMID:16032694

  16. Wavelet regularization of the 2D incompressible Euler equations

    NASA Astrophysics Data System (ADS)

    Nguyen van Yen, Romain; Farge, Marie; Schneider, Kai

    2009-11-01

    We examine the viscosity dependence of the solutions of the two-dimensional Navier-Stokes equations in periodic and wall-bounded domains, for Reynolds numbers varying from 10^3 to 10^7. We compare the Navier-Stokes solutions to those of the regularized two-dimensional Euler equations. The regularization is performed by applying at each time step the wavelet-based CVS filter (Farge et al., Phys. Fluids, 11, 1999), which splits turbulent fluctuations into coherent and incoherent contributions. We find that for Reynolds numbers around 10^5 and above the dissipation of coherent enstrophy tends to become independent of the Reynolds number, while the dissipation of total enstrophy decays to zero logarithmically with the Reynolds number. In the wall-bounded case, we observe an additional production of enstrophy at the wall. As a result, coherent enstrophy diverges as the Reynolds number tends to infinity, but its time derivative seems to remain bounded independently of the Reynolds number. This indicates that a balance may have been established between coherent enstrophy dissipation and coherent enstrophy production at the wall. The Reynolds number at which the dissipation of coherent enstrophy becomes independent of the Reynolds number is proposed to define the onset of the fully turbulent regime.

  17. Regular black holes and noncommutative geometry inspired fuzzy sources

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinpei

    2016-05-01

    We investigated regular black holes with fuzzy sources in three and four dimensions. The density distributions of such fuzzy sources are inspired by noncommutative geometry and given by Gaussian or generalized Gaussian functions. We utilized mass functions to give a physical interpretation of the horizon formation condition for the black holes. In particular, we investigated three-dimensional BTZ-like black holes and four-dimensional Schwarzschild-like black holes in detail, and found that the number of horizons is related to the space-time dimension, and that what matters is the existence of a void in the vicinity of the center of the space-time, rather than noncommutativity itself. As an application, we considered a three-dimensional black hole whose source is the fuzzy disc, a disc-shaped region known in the context of noncommutative geometry. We also analyzed a four-dimensional black hole with a source whose density distribution is an extension of the fuzzy disc, and investigated the horizon formation condition for it.

  18. Oseledets Regularity Functions for Anosov Flows

    NASA Astrophysics Data System (ADS)

    Simić, Slobodan N.

    2011-07-01

    Oseledets regularity functions quantify the deviation of the growth associated with a dynamical system along its Lyapunov bundles from the corresponding uniform exponential growth. The precise degree of regularity of these functions is unknown. We show that for every invariant Lyapunov bundle of a volume-preserving Anosov flow on a closed smooth Riemannian manifold, the corresponding Oseledets regularity functions are in L^p(m) for some p > 0, where m is the probability measure defined by the volume form. We prove an analogous result for essentially bounded cocycles over volume-preserving Anosov flows.

  19. Analysis of regularized inversion of data corrupted by white Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-04-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δ ε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ‖Au − m‖²_{L²} + α ‖u‖²_{H^r} } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed.
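
    The r = 0 case of the estimate above can be illustrated numerically. The sketch below is not taken from the paper: the forward map A (a discretized integration operator, a typical smoothing and hence ill-posed map), the noise level, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Standard Tikhonov (r = 0): minimize ||A u - m||^2 + alpha * ||u||^2.
rng = np.random.default_rng(0)
n = 50
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))        # integration operator (smoothing, ill-posed)
x = np.linspace(0.0, 1.0, n)
u_true = np.sin(2 * np.pi * x)
delta = 0.01
m = A @ u_true + delta * rng.standard_normal(n)   # m = A u + delta * white noise

def tikhonov(A, m, alpha):
    """Solve the normal equations (A^T A + alpha I) u = A^T m."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ m)

u_naive = np.linalg.solve(A, m)    # unregularized inversion: noise is amplified
u_reg = tikhonov(A, m, alpha=1e-3) # regularized estimate: stable
```

With the smoothing forward map, the naive inverse differentiates the noise and its error dwarfs that of the regularized estimate, which is the behavior the theory above quantifies as δ → 0.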

  20. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this aids in space visualization. Six further extensions are left for the reader to try. (LS)
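
    For reference, one standard derivation (not necessarily among the five in the article) splits the regular octahedron of edge a into two square pyramids of base side a and lateral edge a:

```latex
V = 2\cdot\tfrac{1}{3}a^{2}h,\qquad
h=\sqrt{a^{2}-\left(\tfrac{a\sqrt{2}}{2}\right)^{2}}=\frac{a}{\sqrt{2}},
\qquad\Longrightarrow\qquad
V=\frac{2a^{2}}{3}\cdot\frac{a}{\sqrt{2}}=\frac{\sqrt{2}}{3}\,a^{3}\approx 0.4714\,a^{3}.
```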

  1. Regular Exercise May Boost Prostate Cancer Survival

    MedlinePlus

    ... nih.gov/medlineplus/news/fullstory_158374.html Regular Exercise May Boost Prostate Cancer Survival Study found that ... HealthDay News) -- Sticking to a moderate or intense exercise regimen may improve a man's odds of surviving ...

  2. Regular Exercise: Antidote for Deadly Diseases?

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_160326.html Regular Exercise: Antidote for Deadly Diseases? High levels of physical ... Aug. 9, 2016 (HealthDay News) -- Getting lots of exercise may reduce your risk for five common diseases, ...

  3. Nonminimal black holes with regular electric field

    NASA Astrophysics Data System (ADS)

    Balakin, Alexander B.; Zayats, Alexei E.

    2015-05-01

    We discuss the problem of identifying the coupling constants that describe interactions between photons and spacetime curvature, using exact regular solutions to the extended equations of the nonminimal Einstein-Maxwell theory. We argue that the three nonminimal coupling constants in this theory can be reduced to a single guiding parameter, which plays the role of a nonminimal radius. We base our consideration on two examples of exact solutions obtained in our earlier works: the first describes a nonminimal spherically symmetric object (star or black hole) with a regular radial electric field; the second represents a nonminimal Dirac-type object (monopole or black hole) with a regular metric. We demonstrate that one of the inflexion points of the regular metric function identifies a specific nonminimal radius, thus marking the domain of dominance of nonminimal interactions.

  4. Parallelization of irregularly coupled regular meshes

    NASA Technical Reports Server (NTRS)

    Chase, Craig; Crowley, Kay; Saltz, Joel; Reeves, Anthony

    1992-01-01

    Regular meshes are frequently used for modeling physical phenomena on both serial and parallel computers. One advantage of regular meshes is that efficient discretization schemes can be implemented in a straightforward manner. However, geometrically complex objects, such as aircraft, cannot easily be described by a single regular mesh. Multiple interacting regular meshes are frequently used to describe complex geometries; each mesh models a subregion of the physical domain. The meshes, or subdomains, can be processed in parallel, with periodic updates carried out to move information between the coupled meshes. In many cases there are relatively few subdomains (one to a few dozen), so each subdomain may also be partitioned among several processors. We outline a composite run-time/compile-time approach for supporting these problems efficiently on distributed-memory machines. These methods are described in the context of a multiblock fluid dynamics problem developed at LaRC.
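
    The coupling pattern described above, independent regular meshes that exchange boundary information between steps, can be sketched in a few lines. The toy below (two 1-D subdomains running explicit diffusion, joined through ghost cells) is an invented illustration, not the multiblock solver of the abstract:

```python
import numpy as np

def step(u, nu=0.25):
    """One explicit diffusion update on interior points (ghost cells held fixed)."""
    u[1:-1] += nu * (u[2:] - 2.0 * u[1:-1] + u[:-2])

nx = 32
left = np.zeros(nx + 2)     # each subdomain carries ghost cells at 0 and -1
right = np.zeros(nx + 2)
left[nx // 2] = 1.0         # initial heat spike in the left subdomain

for _ in range(200):
    # "periodic update": copy each mesh's edge value into the neighbour's ghost cell
    left[-1] = right[1]
    right[0] = left[-2]
    step(left)              # the two meshes could now advance in parallel
    step(right)
```

After the loop, heat from the left block has crossed the inter-mesh boundary into the right block, showing that the ghost-cell exchange is what couples otherwise independent regular meshes.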

  5. Blind Poissonian images deconvolution with framelet regularization.

    PubMed

    Fang, Houzhang; Yan, Luxin; Liu, Hai; Chang, Yi

    2013-02-15

    We propose a maximum a posteriori approach to blind Poissonian image deconvolution with framelet regularization for the image and total variation (TV) regularization for the point spread function. Compared with TV-based methods, our algorithm not only suppresses noise effectively but also recovers edges and detailed information. Moreover, the split Bregman method is exploited to solve the resulting minimization problem. Comparative results on both simulated and real images are reported. PMID:23455078

  6. Regularized CT reconstruction on unstructured grid

    NASA Astrophysics Data System (ADS)

    Chen, Yun; Lu, Yao; Ma, Xiangyuan; Xu, Yuesheng

    2016-04-01

    Computed tomography (CT) is an ill-posed problem. Reconstruction on an unstructured grid reduces the computational cost and alleviates the ill-posedness by decreasing the dimension of the solution space. However, there has been no systematic study of edge-preserving regularization methods for CT reconstruction on unstructured grids. In this work, we propose a novel regularization method for CT reconstruction on unstructured grids, such as triangular or tetrahedral meshes generated from initial images reconstructed via an analytic reconstruction method (e.g., filtered back-projection). The proposed method is modeled as a three-term optimization problem containing a weighted least-squares fidelity term motivated by the simultaneous algebraic reconstruction technique (SART). The cost function contains two non-differentiable terms, which complicate the development of a fast solver. A fixed-point proximity algorithm with SART is developed to solve the optimization problem and accelerate convergence. Finally, we compare the proposed method to SART with different regularization methods. Numerical experiments demonstrate that the proposed regularization method on unstructured grids effectively suppresses noise and preserves edge features.
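
    For orientation, the basic SART iteration that motivates the fidelity term above can be sketched as follows. This is the textbook SART update on an invented toy system, not the authors' fixed-point proximity algorithm or their unstructured-grid geometry matrix:

```python
import numpy as np

def sart(A, b, n_iter=2000, lam=1.0):
    """Basic SART: x <- x + lam * C^-1 A^T R^-1 (b - A x),
    with R = diag(row sums of A) and C = diag(column sums of A)."""
    row = A.sum(axis=1); row[row == 0] = 1.0
    col = A.sum(axis=0); col[col == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += lam * (A.T @ ((b - A @ x) / row)) / col
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(40, 20))   # toy nonnegative "geometry" matrix
x_true = rng.uniform(0.0, 1.0, size=20)    # toy attenuation image
b = A @ x_true                              # consistent projection data
x_rec = sart(A, b)
```

On this small consistent system the iteration drives the projection residual toward zero; the paper's contribution is to wrap a weighted least-squares version of this fidelity term with two non-differentiable regularization terms.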

  7. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-01-01

    Breit, Gupta, and Zaks made the first proposal for a new gauge-invariant nonperturbative regularization. The scheme is based on smearing in the fifth time of the Langevin equation. An analysis of their stochastic regularization scheme is given for the case of scalar electrodynamics with the standard covariant gauge fixing. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, difficulties arise which, in general, ruin the scheme. A successful covariant-derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost-free, and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries and is expected to provide suitable regularization for any theory of interest.

  8. A maximal regularity estimate for the non-stationary Stokes equation in the strip

    NASA Astrophysics Data System (ADS)

    Choffrut, Antoine; Nobili, Camilla; Otto, Felix

    2016-04-01

    In a d-dimensional strip with d ≥ 2, we study the non-stationary Stokes equation with no-slip boundary condition in the lower and upper plates and periodic boundary condition in the horizontal directions. In this paper we establish a new maximal regularity estimate in the real interpolation norm

  9. k-Regular maps into Euclidean spaces and the Borsuk-Boltyanskii problem

    SciTech Connect

    Bogatyi, S A

    2002-02-28

    The Borsuk-Boltyanskii problem is solved for odd k, that is, the minimum dimension of a Euclidean space is determined into which any n-dimensional polyhedron (compactum) can be k-regularly embedded. A new lower bound is obtained for even k.

  10. Usual Source of Care in Preventive Service Use: A Regular Doctor versus a Regular Site

    PubMed Central

    Xu, K Tom

    2002-01-01

    Objective To compare the effects of having a regular doctor and having a regular site on five preventive services, controlling for the endogeneity of having a usual source of care. Data Source The Medical Expenditure Panel Survey 1996, conducted by the Agency for Healthcare Research and Quality and the National Center for Health Statistics. Study Design Mammograms, pap smears, blood pressure checkups, cholesterol level checkups, and flu shots were examined. A modified behavioral model framework was presented, which controlled for the endogeneity of having a usual source of care. Based on this framework, a two-equation empirical model was established to predict the probabilities of having a regular doctor and having a regular site, and the use of each type of preventive service. Principal Findings Having a regular doctor was found to have a greater impact than having a regular site on discretionary preventive services, such as blood pressure and cholesterol level checkups. No statistically significant differences were found between the effects of having a regular doctor and having a regular site on the use of flu shots, pap smears, and mammograms. Among the five preventive services, having a usual source of care had the greatest impact on cholesterol level checkups and pap smears. Conclusions Promoting a stable physician–patient relationship can improve patients’ timely receipt of clinical prevention. For certain preventive services, having a regular doctor is more effective than having a regular site. PMID:12546284

  11. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed, based on bi-cubic spline approximations of the permeability and porosity distributions and on the theory of regularization, to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed one. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  12. Regular black holes in f(R) gravity coupled to nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Rodrigues, Manuel E.; Junior, Ednaldo L. B.; Marques, Glauber T.; Zanchin, Vilson T.

    2016-07-01

    We obtain a class of regular black hole solutions in four-dimensional f(R) gravity, R being the curvature scalar, coupled to a nonlinear electromagnetic source. The metric formalism is used and static spherically symmetric spacetimes are assumed. The resulting f(R) and nonlinear electrodynamics functions are characterized by a one-parameter family of solutions which are generalizations of known regular black holes in general relativity coupled to nonlinear electrodynamics. The related regular black holes of general relativity are recovered when the free parameter vanishes, in which case one has f(R) ∝ R. We analyze the regularity of the solutions and also show that there are particular solutions that violate only the strong energy condition.

  13. Analysis of a Regularized Bingham Model with Pressure-Dependent Yield Stress

    NASA Astrophysics Data System (ADS)

    El Khouja, Nazek; Roquet, Nicolas; Cazacliu, Bogdan

    2015-12-01

    The goal of this article is to provide some essential results for the solution of a regularized viscoplastic frictional flow model adapted from the extensive mathematical analysis of the Bingham model. The Bingham model is a standard for the description of viscoplastic flows and is widely used in many application areas. However, wet granular viscoplastic flows necessitate the introduction of additional non-linearities and coupling between the velocity and stress fields. This article proposes a step toward a frictional coupling, characterized by a dependence of the yield stress on the pressure field. A regularized version of this viscoplastic frictional model is analysed in the framework of stationary flows. Existence, uniqueness and regularity are investigated, as well as finite-dimensional and algorithmic approximations. It is shown that the model can be solved and approximated provided a frictional parameter is small enough. Obtaining similar results for the non-regularized model remains an open issue. Numerical investigations are postponed to future work.

  14. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as the "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which are a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the unregularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra-Andaman Earthquake of 2004 - are visible in the global solutions without the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and Nile, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  15. Oxygen saturation resolution influences regularity measurements.

    PubMed

    Garde, Ainara; Karlen, Walter; Dehkordi, Parastoo; Ansermino, J Mark; Dumont, Guy A

    2014-01-01

    The measurement of regularity in the oxygen saturation (SpO2) signal has been suggested for identifying subjects with sleep disordered breathing (SDB). Previous work has shown that children with SDB have lower SpO2 regularity than subjects without SDB (NonSDB). Regularity was measured using non-linear methods such as approximate entropy (ApEn), sample entropy (SampEn) and Lempel-Ziv (LZ) complexity. Different manufacturers' pulse oximeters provide SpO2 at various resolutions, and the effect of this resolution difference on SpO2 regularity has not been studied. To investigate this effect, we used the SpO2 signal of children with and without SDB recorded from the Phone Oximeter (0.1% resolution) and the same SpO2 signal rounded to the nearest integer (artificial 1% resolution). To further validate the effect of rounding, we also used the SpO2 signal (1% resolution) recorded simultaneously from polysomnography (PSG) as a control signal. We estimated SpO2 regularity by computing ApEn, SampEn and LZ complexity over a 5-min sliding window, and showed that different resolutions provide significantly different results. The regularity calculated using the 0.1% SpO2 resolution showed no significant differences between SDB and NonSDB. However, the artificial 1% resolution SpO2 yielded significant differences between SDB and NonSDB, showing a more random SpO2 pattern (lower SpO2 regularity) in SDB children, as suggested in the past. Similar results were obtained with the SpO2 recorded from PSG (1% resolution), which further validated that this change in SpO2 regularity was due to the rounding effect. Therefore, SpO2 resolution has a great influence on regularity measurements such as ApEn, SampEn and LZ complexity, and should be considered when studying the SpO2 pattern in children with SDB. PMID:25570437
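
    To make the regularity measures concrete, here is a minimal sample-entropy implementation together with a rounding experiment in the spirit of the study. The signals, window length, and parameters (m = 2, r = 0.2·std) are illustrative stand-ins, not the SpO2 recordings analyzed above:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -log(A/B), where B and A count template pairs of length
    m and m+1 whose Chebyshev distance is below the tolerance r."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):
            d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
            c += int((d < r).sum())
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
t = np.arange(600)
regular = np.sin(2 * np.pi * t / 40)     # highly regular signal -> low SampEn
irregular = rng.standard_normal(600)     # white noise -> high SampEn
se_regular = sample_entropy(regular)
se_irregular = sample_entropy(irregular)
```

Rounding `irregular` to unit resolution (the 1%-resolution analogue) produces many exact template matches and shifts its SampEn, which is the resolution effect the abstract warns about.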

  16. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties of the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem, so regularization techniques such as Tikhonov regularization are used to solve it. A sparse regularization based on the L1 norm is superior in preserving boundary information at sharp changes or discontinuities in the image. However, the limitation of sparse regularization lies in the time it takes to solve the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on a separable approximation algorithm is proposed, using an adaptive step size and a preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798

  17. Assessment of regularization techniques for electrocardiographic imaging

    PubMed Central

    Milanič, Matija; Jazbinšek, Vojko; MacLeod, Robert S.; Brooks, Dana H.; Hren, Rok

    2014-01-01

    A widely used approach to solving the inverse problem in electrocardiography involves computing potentials on the epicardium from electrocardiograms (ECGs) measured on the torso surface. The main challenge of solving this electrocardiographic imaging (ECGI) problem lies in its intrinsic ill-posedness. While many regularization techniques have been developed to control wild oscillations of the solution, the choice of proper regularization methods for obtaining clinically acceptable solutions is still a subject of ongoing research, and there has been little rigorous comparison across methods proposed by different groups. This study systematically compared various regularization techniques for solving the ECGI problem under a unified simulation framework, consisting of both 1) progressively more complex idealized source models (from a single dipole to a triplet of dipoles), and 2) an electrolytic human torso tank containing a live canine heart, with the cardiac source modeled by potentials measured on a cylindrical cage placed around the heart. We tested 13 different regularization techniques for solving the inverse problem of recovering epicardial potentials, and found that non-quadratic methods (total variation algorithms) and first- and second-order Tikhonov regularization outperformed the other methodologies and resulted in similar average reconstruction errors. PMID:24369741

  18. Shadow of rotating regular black holes

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon; Amir, Muhammed; Ahmedov, Bobomurat; Ghosh, Sushant G.

    2016-05-01

    We study the shadows cast by three types of rotating regular black holes: Ayón-Beato-García (ABG), Hayward, and Bardeen. In addition to the total mass (M) and rotation parameter (a), these black holes carry further parameters: an electric charge (Q), a deviation parameter (g), and a magnetic charge (g*). Interestingly, the size of the shadow is affected by these parameters in addition to the rotation parameter. We find that the radius of the shadow in each case decreases monotonically, and the distortion parameter increases, as the values of these parameters increase. A comparison with the standard Kerr case is also made. We have also studied the influence of a plasma environment around regular black holes on their shadows. The presence of plasma increases the apparent size of the regular black hole's shadow due to two effects: (i) gravitational redshift of the photons and (ii) the radial dependence of the plasma density.

  19. Strong regularizing effect of integrable systems

    SciTech Connect

    Zhou, Xin

    1997-11-01

    Many time evolution problems have the so-called strong regularizing effect, that is, with any irregular initial data, as soon as t becomes greater than 0 the solution becomes C^∞ in both the spatial and temporal variables. This paper studies integrable systems in 1+1 dimensions for such a regularizing effect. In the work by Sachs and Kappler [S][K] (see also the earlier works [KFJ] and [Ka]), the strong regularizing effect is proved for KdV with rapidly decaying irregular initial data, using the inverse scattering method. There are two equivalent Gel'fand-Levitan-Marchenko (GLM) equations associated to an inverse scattering problem, one normalized at x = +∞ and the other at x = −∞. The method of [S][K] relies on the fact that KdV waves propagate only in one direction and therefore one of the two GLM equations remains normalized and can be differentiated infinitely many times. 15 refs.

  20. Regularized image recovery in scattering media.

    PubMed

    Schechner, Yoav Y; Averbuch, Yuval

    2007-09-01

    When imaging in scattering media, visibility degrades as objects become more distant. Visibility can be significantly restored by computer vision methods that account for physical processes occurring during image formation. Nevertheless, such recovery is prone to noise amplification in pixels corresponding to distant objects, where the medium transmittance is low. We present an adaptive filtering approach that counters the above problems: while significantly improving visibility relative to raw images, it inhibits noise amplification. Essentially, the recovery formulation is regularized, where the regularization adapts to the spatially varying medium transmittance. Thus, this regularization does not blur close objects. We demonstrate the approach in atmospheric and underwater experiments, based on an automatic method for determining the medium transmittance. PMID:17627052
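
    The trade-off described above can be illustrated with the standard scattering image-formation model I = J·t + A·(1 − t): inverting it naively amplifies noise by 1/t at distant (low-transmittance) pixels. The crude fix below, bounding the amplification factor, is only an illustrative stand-in for the paper's transmittance-adaptive regularized filter; all signals and values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
J = 0.5 + 0.3 * np.sin(np.linspace(0.0, 8.0, n))   # true object radiance
t = np.exp(-np.linspace(0.0, 4.0, n))              # transmittance: low = distant
A_inf = 0.8                                        # veiling light ("airlight")
I = J * t + A_inf * (1.0 - t) + 0.02 * rng.standard_normal(n)  # noisy raw image

J_naive = (I - A_inf) / t + A_inf                    # exact inversion: 1/t noise gain
J_reg = (I - A_inf) / np.maximum(t, 0.05) + A_inf    # amplification capped at 20x

rmse_naive = np.sqrt(np.mean((J_naive - J) ** 2))
rmse_reg = np.sqrt(np.mean((J_reg - J) ** 2))
```

Capping 1/t trades a small bias at distant pixels for a large reduction in amplified noise, the same qualitative balance the adaptive regularization strikes spatially.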

  1. [Why regular physical activity favors longevity].

    PubMed

    Pentimone, F; Del Corso, L

    1998-06-01

    Regular physical exercise is useful at all ages. In the elderly, even a gentle exercise programme consisting of walking, bicycling, or playing golf, if performed consistently, increases longevity by preventing the onset of the main diseases or alleviating the handicaps they may have caused. Cardiovascular diseases, which represent the main cause of death in the elderly, and osteoporosis, a disabling disease potentially capable of shortening life expectancy, benefit from physical exercise, which, if performed regularly well before the start of old age, may help to prevent them. Over the past few years there has been growing evidence of concrete protection against neoplasia and even the ageing process itself. PMID:9739351

  2. Learning with regularizers in multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Saad, David; Rattray, Magnus

    1998-02-01

    We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labeled by a two-layer teacher network with an arbitrary number of hidden units that may be corrupted by Gaussian output noise. We examine the effect of weight decay regularization on the dynamical evolution of the order parameters and generalization error in various phases of the learning process, in both noiseless and noisy scenarios.

  3. Demosaicing as the problem of regularization

    NASA Astrophysics Data System (ADS)

    Kunina, Irina; Volkov, Aleksey; Gladilin, Sergey; Nikolaev, Dmitry

    2015-12-01

    Demosaicing is the process of reconstructing a full-color image from the Bayer mosaic used in digital cameras for image formation. This problem is usually treated as an interpolation problem. In this paper, we propose to consider demosaicing as the problem of solving an underdetermined system of algebraic equations using regularization methods. We consider regularization with the standard l_{1/2}-, l_1- and l_2-norms and its effect on the quality of image reconstruction. The experimental results show that the proposed technique can both be used in existing methods and serve as the basis for new ones.

  4. REGULAR VERSUS DIFFUSIVE PHOTOSPHERIC FLUX CANCELLATION

    SciTech Connect

    Litvinenko, Yuri E.

    2011-04-20

    Observations of photospheric flux cancellation on the Sun imply that cancellation can be a diffusive rather than regular process. A criterion is derived, which quantifies the parameter range in which diffusive photospheric cancellation should occur. Numerical estimates show that regular cancellation models should be expected to give a quantitatively accurate description of photospheric cancellation. The estimates rely on a recently suggested scaling for a turbulent magnetic diffusivity, which is consistent with the diffusivity measurements on spatial scales varying by almost two orders of magnitude. Application of the turbulent diffusivity to large-scale dispersal of the photospheric magnetic flux is discussed.

  5. Regularized Data Assimilation and Fusion of non-Gaussian States Exhibiting Sparse Prior in Transform Domains

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2012-12-01

    Improved estimation of geophysical state variables in a noisy environment from down-sampled observations and background model forecasts has been the subject of growing research in recent decades. Often the number of degrees of freedom in high-dimensional non-Gaussian natural states is quite small compared to their ambient dimensionality, a property often revealed as a sparse representation in an appropriately chosen domain. Aiming to increase the hydrometeorological forecast skill and motivated by the wavelet-domain sparsity of some land-surface geophysical states, a new framework is presented that recasts the classical variational data assimilation/fusion (DA/DF) problem via L_1 regularization in the wavelet domain. Our results suggest that proper regularization can lead to more accurate recovery of a wide range of smooth/non-smooth geophysical states exhibiting remarkable non-Gaussian features. The promise of the proposed framework is demonstrated in multi-sensor satellite and land-based precipitation data fusion, while the regularized DA is performed on the heat equation in a 4D-VAR context, using sparse regularization in the wavelet domain. [Figure: top panel, noisy observations of the linear advection-diffusion equation at five consecutive snapshots; middle panel, classical 4D-VAR; bottom panel, l_1-regularized 4D-VAR with improved results.]
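The L_1 penalty used in the wavelet domain has a closed-form proximal step, soft thresholding, which is the basic building block of such regularized solvers: small (noise-dominated) coefficients are zeroed while large (signal) coefficients survive. A minimal sketch of that step alone (illustrative; the paper embeds it in a full 4D-VAR cost function):

```python
def soft_threshold(v, t):
    """Prox of t*|x|: argmin_x 0.5*(x - v)^2 + t*|x|."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def l1_regularized_denoise(coeffs, lam):
    """Sparsity-promoting estimate of transform-domain coefficients:
    minimize 0.5*||x - c||^2 + lam*||x||_1, separable coefficient-wise."""
    return [soft_threshold(c, lam) for c in coeffs]

# Small coefficients (noise) are zeroed; large ones (signal) are kept,
# shrunk by lam.
est = l1_regularized_denoise([5.0, 0.2, -0.1, -4.0, 0.05], lam=0.5)
# -> [4.5, 0.0, 0.0, -3.5, 0.0]
```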

  6. A new regularity-based algorithm for characterizing heterogeneities from digitized core image

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Zaourar, Naima; Hachay, Olga

    2014-05-01

    The two-dimensional multifractional Brownian motion (2D-mBm) is receiving increasing interest in image processing. However, one difficulty inherent to this fractal model is the estimation of its local Hölderian regularity function. In this paper, we suggest a new estimator of the local Hölder exponent of 2D-mBm paths. The suggested algorithm was first tested on synthetic 2D-mBm paths, then implemented on digitized image data of a core extracted from an Algerian borehole. The obtained regularity map shows a clear correlation with the geological features observed on the investigated core. These lithological discontinuities are reflected by local variations of the Hölder exponent value. However, no clear relationship can be drawn between regularity and digitized data. To conclude, the suggested algorithm may be a powerful tool for exploring heterogeneities from core images using the regularity exponents. Keywords: core image, two-dimensional multifractional Brownian motion, fractal, regularity.

  7. Regularity for steady periodic capillary water waves with vorticity.

    PubMed

    Henry, David

    2012-04-13

    In the following, we prove new regularity results for two-dimensional steady periodic capillary water waves with vorticity, in the absence of stagnation points. Firstly, we prove that if the vorticity function has a Hölder-continuous first derivative, then the free surface is a smooth curve and the streamlines beneath the surface will be real analytic. Furthermore, once we assume that the vorticity function is real analytic, it will follow that the wave surface profile is itself also analytic. A particular case of this result includes irrotational fluid flow where the vorticity is zero. The property of the streamlines being analytic allows us to gain physical insight into small-amplitude waves by justifying a power-series approach. PMID:22393112

  8. Uncorrelated regularized local Fisher discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Zhan; Ruan, Qiuqi; An, Gaoyun

    2014-07-01

    A local Fisher discriminant analysis can work well for a multimodal problem. However, it often suffers from the undersampled problem, which makes the local within-class scatter matrix singular. We develop a supervised discriminant analysis technique called uncorrelated regularized local Fisher discriminant analysis for image feature extraction. In this technique, the local within-class scatter matrix is approximated by a full-rank matrix that not only solves the undersampled problem but also eliminates the adverse impact of small and zero eigenvalues. Statistically uncorrelated features are obtained to remove redundancy. A trace ratio criterion and the corresponding iterative algorithm are employed to globally solve the objective function. Experimental results on four well-known face databases indicate that our proposed method is effective and outperforms conventional dimensionality reduction methods.
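Approximating a singular within-class scatter matrix by a full-rank one is commonly done by shrinking it toward the identity. A toy sketch of that regularization step (the shrinkage form and parameter here are illustrative assumptions, not the paper's exact construction):

```python
def regularize_scatter(S, alpha):
    """Shrink a (possibly singular) scatter matrix toward the identity:
    S_reg = (1 - alpha) * S + alpha * I, which is full rank for any
    alpha > 0 when S is positive semi-definite."""
    n = len(S)
    return [[(1 - alpha) * S[i][j] + (alpha if i == j else 0.0)
             for j in range(n)] for i in range(n)]

def det2(S):
    """Determinant of a 2x2 matrix (zero iff the matrix is singular)."""
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]

# Rank-1 (singular) within-class scatter, as arises when the number of
# samples per class is smaller than the feature dimension.
S = [[1.0, 1.0], [1.0, 1.0]]          # det = 0 -> not invertible
S_reg = regularize_scatter(S, alpha=0.1)
```

After shrinkage the matrix is invertible, so the discriminant criterion involving its inverse is well defined, and the near-zero eigenvalue no longer dominates the solution.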

  9. The effect of regularization on the reconstruction of ACAR data

    NASA Astrophysics Data System (ADS)

    Weber, J. A.; Ceeh, H.; Hugenschmidt, C.; Leitner, M.; Böni, P.

    2014-04-01

    The Fermi surface, i.e. the two-dimensional surface separating occupied and unoccupied states in k-space, is the defining property of a metal. Full information about its shape is mandatory for identifying nesting vectors or for validating band structure calculations. With the angular correlation of positron-electron annihilation radiation (ACAR) it is easy to get projections of the Fermi surface. Nevertheless it is claimed to be inexact compared to more common methods like the determination based on quantum oscillations or angle-resolved photoemission spectroscopy. In this article we will present a method for reconstructing the Fermi surface from projections with statistically correct data treatment which is able to increase accuracy by introducing different types of regularization.

  10. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping. PMID:24976795

  11. Regularizing the divergent structure of light-front currents

    SciTech Connect

    Bakker, Bernard L. G.; Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2001-04-01

    The divergences appearing in the (3+1)-dimensional fermion-loop calculations are often regulated by smearing the vertices in a covariant manner. Performing a parallel light-front calculation, we corroborate the similarity between the vertex-smearing technique and the Pauli-Villars regularization. In the light-front calculation of the electromagnetic meson current, we find that the persistent end-point singularity that appears in the case of point vertices is removed even if the smeared vertex is taken to the limit of the point vertex. Recapitulating the current conservation, we substantiate the finiteness of both valence and nonvalence contributions in all components of the current with the regularized bound-state vertex. However, we stress that each contribution, valence or nonvalence, depends on the reference frame even though the sum is always frame independent. The numerical taxonomy of each contribution including the instantaneous contribution and the zero-mode contribution is presented in the {pi}, K, and D-meson form factors.
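For orientation, Pauli-Villars regularization modifies a scalar propagator by subtracting a heavy-mass regulator contribution, improving the large-momentum falloff from 1/p^2 to 1/p^4 and thereby taming the loop divergence; the point-vertex limit mentioned above corresponds to taking the regulator mass Λ → ∞ (a standard textbook form, not a formula quoted from this record):

```latex
\frac{1}{p^2 - m^2 + i\epsilon}
\;\longrightarrow\;
\frac{1}{p^2 - m^2 + i\epsilon} - \frac{1}{p^2 - \Lambda^2 + i\epsilon}
= \frac{m^2 - \Lambda^2}{\left(p^2 - m^2 + i\epsilon\right)\left(p^2 - \Lambda^2 + i\epsilon\right)}
```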

  12. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... the credit union's paid-in and unimpaired capital and surplus, as determined in accordance with §...

  13. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... the credit union's paid-in and unimpaired capital and surplus, as determined in accordance with §...

  14. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... the credit union's paid-in and unimpaired capital and surplus, as determined in accordance with §...

  15. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... the credit union's paid-in and unimpaired capital and surplus, as determined in accordance with §...

  16. Commitment and Dependence Upon Regular Running.

    ERIC Educational Resources Information Center

    Sachs, Michael L.; Pargman, David

    The linear relationship between intellectual commitment to running and psychobiological dependence upon running is examined. A sample of 540 regular runners (running frequency greater than three days per week for the past year for the majority) was surveyed with a questionnaire. Measures of commitment and dependence on running, as well as…

  17. RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING.

    PubMed

    Liu, Meizhu; Vemuri, Baba C

    2011-03-30

    Boosting is a versatile machine learning technique that has numerous applications including, but not limited to, image processing, computer vision, and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities (in closed form) - used to represent the distribution over the training data and the classification error respectively - to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost compared to other regularized boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LPBoost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home-grown epilepsy database and the well-known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
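The closed-form distance in question is the geodesic distance between square-root densities: since Σ(√p_i)² = 1, square-root densities live on the unit sphere, and the geodesic distance is the arc length arccos of their inner product (the Bhattacharyya coefficient). A minimal sketch for discrete densities (the function name is ours):

```python
import math

def sqrt_density_distance(p, q):
    """Closed-form geodesic (Riemannian) distance between two discrete
    densities via their square roots, which are points on the unit
    sphere: d(p, q) = arccos( sum_i sqrt(p_i * q_i) )."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))  # Bhattacharyya coeff.
    return math.acos(min(1.0, bc))  # clamp for floating-point safety

d_same = sqrt_density_distance([0.5, 0.5], [0.5, 0.5])  # -> 0.0
d_diff = sqrt_density_distance([1.0, 0.0], [0.0, 1.0])  # -> pi/2 (maximal)
```

Because the distance is a single dot product and an arccos, evaluating it inside an iterative boosting update adds negligible cost, which is the efficiency point the abstract makes.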

  18. Generalisation of Regular and Irregular Morphological Patterns.

    ERIC Educational Resources Information Center

    Prasada, Sandeep; and Pinker, Steven

    1993-01-01

    When it comes to explaining English verbs' patterns of regular and irregular generalization, single-network theories have difficulty with the former, and rule-only theories with the latter. Linguistic and psycholinguistic evidence, based on observation during experiments and simulations in morphological pattern generation, independently call…

  19. Observing Special and Regular Education Classrooms.

    ERIC Educational Resources Information Center

    Hersh, Susan B.

    The paper describes an observation instrument originally developed as a research tool to assess both the special setting and the regular classroom. The instrument can also be used in determining appropriate placement for students with learning disabilities and for programming the transfer of skills learned in the special setting to the regular…

  20. Starting flow in regular polygonal ducts

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.

    2016-06-01

    The starting flows in regular polygonal ducts of S = 3, 4, 5, 6, 8 sides are determined by the method of eigenfunction superposition. The necessary S-fold symmetric eigenfunctions and eigenvalues of the Helmholtz equation are found either exactly or by boundary point match. The results show the starting time is governed by the first eigenvalue.

  1. 28 CFR 540.44 - Regular visitors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PERSONS IN THE COMMUNITY Visiting Regulations § 540.44 Regular visitors. An inmate desiring to have... ordinarily will be extended to friends and associates having an established relationship with the inmate... of the institution. Exceptions to the prior relationship rule may be made, particularly for...

  2. Regular Classroom Teachers' Perceptions of Mainstreaming Effects.

    ERIC Educational Resources Information Center

    Ringlaben, Ravic P.; Price, Jay R.

    To assess regular classroom teachers' perceptions of mainstreaming, a 22 item questionnaire was completed by 117 teachers (K through 12). Among results were that nearly half of the Ss indicated a lack of preparation for implementing mainstreaming; 47% tended to be very willing to accept mainstreamed students; 42% said mainstreaming was working…

  3. Regularizing cosmological singularities by varying physical constants

    SciTech Connect

    Dąbrowski, Mariusz P.; Marosek, Konrad

    2013-02-01

    Varying physical constant cosmologies were claimed to solve standard cosmological problems such as the horizon, the flatness and the Λ-problem. In this paper, we suggest yet another possible application of these theories: solving the singularity problem. By specifying some examples we show that various cosmological singularities may be regularized provided the physical constants evolve in time in an appropriate way.

  4. Exploring the structural regularities in networks

    NASA Astrophysics Data System (ADS)

    Shen, Hua-Wei; Cheng, Xue-Qi; Guo, Jia-Feng

    2011-11-01

    In this paper, we consider the problem of exploring structural regularities of networks by dividing the nodes of a network into groups such that the members of each group have similar patterns of connections to other groups. Specifically, we propose a general statistical model to describe network structure. In this model, a group is viewed as a hidden or unobserved quantity and it is learned by fitting the observed network data using the expectation-maximization algorithm. Compared with existing models, the most prominent strength of our model is the high flexibility. This strength enables it to possess the advantages of existing models and to overcome their shortcomings in a unified way. As a result, not only can broad types of structure be detected without prior knowledge of the type of intrinsic regularities existing in the target network, but also the type of identified structure can be directly learned from the network. Moreover, by differentiating outgoing edges from incoming edges, our model can detect several types of structural regularities beyond competing models. Tests on a number of real world and artificial networks demonstrate that our model outperforms the state-of-the-art model in shedding light on the structural regularities of networks, including the overlapping community structure, multipartite structure, and several other types of structure, which are beyond the capability of existing models.

  5. Dyslexia in Regular Orthographies: Manifestation and Causation

    ERIC Educational Resources Information Center

    Wimmer, Heinz; Schurz, Matthias

    2010-01-01

    This article summarizes our research on the manifestation of dyslexia in German and on cognitive deficits, which may account for the severe reading speed deficit and the poor orthographic spelling performance that characterize dyslexia in regular orthographies. An only limited causal role of phonological deficits (phonological awareness,…

  6. Regularities in Spearman's Law of Diminishing Returns.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    2003-01-01

    Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…

  7. Fast Image Reconstruction with L2-Regularization

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Fan, Audrey P.; Setsompop, Kawin; Cauley, Stephen F.; Wald, Lawrence L.; Adalsteinsson, Elfar

    2014-01-01

    Purpose We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speed-up relative to state-of-the-art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction. Materials and Methods We compare fast L2-based methods to state-of-the-art algorithms employing iterative L1- and L2-regularization in numerical phantom and in vivo data in three applications: 1) fast Quantitative Susceptibility Mapping (QSM), 2) lipid artifact suppression in Magnetic Resonance Spectroscopic Imaging (MRSI), and 3) Diffusion Spectrum Imaging (DSI). In all cases, the proposed L2-based methods are compared with the state-of-the-art algorithms, and a two-to-three order of magnitude speed-up is demonstrated with similar reconstruction quality. Results The closed-form solution developed for regularized QSM allows processing of a 3D volume in under 5 seconds, the proposed lipid suppression algorithm takes under 1 second to reconstruct single-slice MRSI data, and the PCA-based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice in under 30 seconds, all running in Matlab on a standard workstation. Conclusion For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or to L1-based iterative algorithms, without compromising image quality. PMID:24395184
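The computational appeal of L2 regularization is the closed-form normal-equations solution x = (AᵀA + λI)⁻¹Aᵀb, which needs no iterations. A toy sketch for a two-column system, solved with an explicit 2x2 inverse (illustrative only; the paper's solvers exploit problem structure such as FFT-diagonalizable operators):

```python
def tikhonov_closed_form_2d(A, b, lam):
    """Closed-form L2-regularized least squares for a 2-column matrix A:
    x = (A^T A + lam*I)^{-1} A^T b, via an explicit 2x2 inverse."""
    # Normal-equations matrix M = A^T A + lam*I and vector v = A^T b.
    m00 = sum(r[0] * r[0] for r in A) + lam
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    v0 = sum(r[0] * bi for r, bi in zip(A, b))
    v1 = sum(r[1] * bi for r, bi in zip(A, b))
    det = m00 * m11 - m01 * m01
    return [(m11 * v0 - m01 * v1) / det, (m00 * v1 - m01 * v0) / det]

# Overdetermined, consistent system: solution approaches [1, 2] as lam -> 0.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = tikhonov_closed_form_2d(A, b, lam=1e-6)
```

An iterative L1 solver would instead loop over gradient and thresholding steps; the contrast between that loop and this single formula is the source of the reported speed-up.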

  8. Handicapped Children in the Regular Classroom.

    ERIC Educational Resources Information Center

    Fountain Valley School District, CA.

    Reported was a project in which 60 educable mentally retarded (EMR) and 30 educationally handicapped (EH) elementary school students were placed in regular classrooms to determine whether they could be effectively educated in those settings. Effective education was defined in terms of improvement in reading, mathematics, student and teacher…

  9. Spectral analysis of two-dimensional Bose-Hubbard models

    NASA Astrophysics Data System (ADS)

    Fischer, David; Hoffmann, Darius; Wimberger, Sandro

    2016-04-01

    One-dimensional Bose-Hubbard models are well known to obey a transition from regular to quantum-chaotic spectral statistics. We are extending this concept to relatively simple two-dimensional many-body models. Also in two dimensions a transition from regular to chaotic spectral statistics is found and discussed. In particular, we analyze the dependence of the spectral properties on the bond number of the two-dimensional lattices and the applied boundary conditions. For maximal connectivity, the systems behave most regularly in agreement with the applicability of mean-field approaches in the limit of many nearest-neighbor couplings at each site.
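The regular-to-chaotic transition in spectral statistics is commonly quantified by the consecutive level-spacing ratio, whose mean is ≈0.386 for Poisson (regular) spectra and ≈0.536 for GOE (chaotic) spectra. A sketch of that standard diagnostic (its use here as a stand-in for the paper's particular analysis is our assumption):

```python
import random

def mean_spacing_ratio(levels):
    """Mean consecutive-spacing ratio <r> with
    r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1});
    ~0.386 for Poisson (regular) spectra, ~0.536 for GOE (chaotic)."""
    s = sorted(levels)
    gaps = [b - a for a, b in zip(s, s[1:])]
    ratios = [min(a, b) / max(a, b) for a, b in zip(gaps, gaps[1:])]
    return sum(ratios) / len(ratios)

# Uncorrelated random levels have exponential spacings, i.e. Poisson
# statistics, so <r> should be close to 2*ln(2) - 1 ~ 0.386.
rng = random.Random(1)
poisson_levels = [rng.random() for _ in range(20000)]
r_poisson = mean_spacing_ratio(poisson_levels)
```

The ratio statistic is convenient because, unlike the spacing distribution itself, it requires no unfolding of the spectrum's mean level density.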

  10. Functional calculus and *-regularity of a class of Banach algebras II

    NASA Astrophysics Data System (ADS)

    Leung, Chi-Wai; Ng, Chi-Keung

    2006-10-01

    In this article, we define a natural Banach *-algebra for a C*-dynamical system (A,G,[alpha]) which is slightly bigger than L1(G;A) (they are the same if A is finite-dimensional). We will show that this algebra is *-regular if G has polynomial growth. The main result in this article extends the two main results in [C.W. Leung, C.K. Ng, Functional calculus and *-regularity of a class of Banach algebras, Proc. Amer. Math. Soc., in press].

  11. The geometric β-function in curved space-time under operator regularization

    SciTech Connect

    Agarwala, Susama

    2015-06-15

    In this paper, I compare the generators of the renormalization group flow, or the geometric β-functions, for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric β-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I then extend the analysis to find the generator of the renormalization group flow to conformally coupled scalar-field theories on the same manifolds. The geometric β-function in this case is not defined.

  12. Learning regular expressions for clinical text classification

    PubMed Central

    Bui, Duy Duc An; Zeng-Treitler, Qing

    2014-01-01

    Objectives Natural language processing (NLP) applications typically use regular expressions that have been developed manually by human experts. Our goal is to automate both the creation and utilization of regular expressions in text classification. Methods We designed a novel regular expression discovery (RED) algorithm and implemented two text classifiers based on RED. The RED+ALIGN classifier combines RED with an alignment algorithm, and RED+SVM combines RED with a support vector machine (SVM) classifier. Two clinical datasets were used for testing and evaluation: the SMOKE dataset, containing 1091 text snippets describing smoking status; and the PAIN dataset, containing 702 snippets describing pain status. We performed 10-fold cross-validation to calculate accuracy, precision, recall, and F-measure metrics. In the evaluation, an SVM classifier was trained as the control. Results The two RED classifiers achieved 80.9–83.0% in overall accuracy on the two datasets, which is 1.3–3% higher than SVM's accuracy (p<0.001). Similarly, small but consistent improvements have been observed in precision, recall, and F-measure when RED classifiers are compared with SVM alone. More significantly, RED+ALIGN correctly classified many instances that were misclassified by the SVM classifier (8.1–10.3% of the total instances and 43.8–53.0% of SVM's misclassifications). Conclusions Machine-generated regular expressions can be effectively used in clinical text classification. The regular expression-based classifier can be combined with other classifiers, like SVM, to improve classification performance. PMID:24578357
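A regular-expression text classifier of the kind described can be as simple as an ordered list of labeled patterns; the patterns below are hypothetical stand-ins for what the RED algorithm would discover automatically from labeled snippets:

```python
import re

# Hypothetical hand-written patterns in the spirit of machine-discovered
# regular expressions; RED itself learns such patterns from labeled data.
PATTERNS = {
    "non-smoker": re.compile(r"\b(denies|quit|never)\b.*\bsmok", re.I),
    "smoker": re.compile(r"\bsmok(es|ing|er)\b", re.I),
}

def classify(snippet, default="unknown"):
    """Return the label of the first pattern that matches the snippet;
    pattern order encodes precedence (negations checked first)."""
    for label, pat in PATTERNS.items():
        if pat.search(snippet):
            return label
    return default

print(classify("Patient denies smoking."))  # the negation rule fires first
```

Checking the negation pattern before the plain "smoker" pattern is essential: "denies smoking" contains "smoking" and would otherwise be mislabeled, which is exactly the kind of instance where the abstract reports RED+ALIGN correcting an SVM.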

  13. Maximum-likelihood constrained regularized algorithms: an objective criterion for the determination of regularization parameters

    NASA Astrophysics Data System (ADS)

    Lanteri, Henri; Roche, Muriel; Cuevas, Olga; Aime, Claude

    1999-12-01

    We propose regularized versions of Maximum Likelihood algorithms for Poisson process with non-negativity constraint. For such process, the best-known (non- regularized) algorithm is that of Richardson-Lucy, extensively used for astronomical applications. Regularization is necessary to prevent an amplification of the noise during the iterative reconstruction; this can be done either by limiting the iteration number or by introducing a penalty term. In this Communication, we focus our attention on the explicit regularization using Tikhonov (Identity and Laplacian operator) or entropy terms (Kullback-Leibler and Csiszar divergences). The algorithms are established from the Kuhn-Tucker first order optimality conditions for the minimization of the Lagrange function and from the method of successive substitutions. The algorithms may be written in a `product form'. Numerical illustrations are given for simulated images corrupted by photon noise. The effects of the regularization are shown in the Fourier plane. The tests we have made indicate that a noticeable improvement of the results may be obtained for some of these explicitly regularized algorithms. We also show that a comparison with a Wiener filter can give the optimal regularizing conditions (operator and strength).
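The "product form" mentioned above can be sketched for a Tikhonov (identity-operator) penalty: the usual multiplicative Richardson-Lucy correction is damped by a penalty-dependent factor via a one-step-late scheme, preserving non-negativity. The 1D kernel, penalty form, and parameters here are illustrative assumptions, not the authors' exact algorithm:

```python
def convolve(x, k):
    """'Same'-size convolution with zero padding; k is a symmetric
    odd-length kernel, so it equals its own adjoint."""
    h = len(k) // 2
    n = len(x)
    return [sum(k[j + h] * x[i + j] for j in range(-h, h + 1)
                if 0 <= i + j < n) for i in range(n)]

def richardson_lucy_tikhonov(y, k, lam=0.001, iters=200):
    """Multiplicative (product-form) Richardson-Lucy iteration with a
    Tikhonov identity-operator penalty: the regularizer enters as a
    damping factor 1/(1 + lam*x), keeping the iterates non-negative."""
    x = [1.0] * len(y)
    for _ in range(iters):
        hx = convolve(x, k)                          # forward model H x
        ratio = [yi / max(hi, 1e-12) for yi, hi in zip(y, hx)]
        corr = convolve(ratio, k)                    # adjoint H^T (y / Hx)
        x = [xi * ci / (1.0 + lam * xi) for xi, ci in zip(x, corr)]
    return x

k = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 4.0, 0.0, 0.0]   # point source
y = convolve(truth, k)              # blurred observation: [0, 1, 2, 1, 0]
x_hat = richardson_lucy_tikhonov(y, k)
```

Without the damping factor (lam = 0) this reduces to the plain Richardson-Lucy algorithm the abstract starts from; the penalty prevents the noise amplification that otherwise sets in at large iteration numbers.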

  14. A comprehensive methodology for algorithm characterization, regularization and mapping into optimal VLSI arrays

    SciTech Connect

    Barada, H.R.

    1989-01-01

    This dissertation provides a fairly comprehensive treatment of a broad class of algorithms as it pertains to systolic implementation. The author describes formal algorithmic transformations that can be utilized to map regular and some irregular compute-bound algorithms into best-fit, time-optimal systolic architectures. The resulting architectures can be one-dimensional, two-dimensional, three-dimensional or nonplanar. The methodology detailed in the dissertation employs, like other methods, the concept of a dependence vector to order, in space and time, the index points representing the algorithm. However, by differentiating between two types of dependence vectors, the ordering procedure is allowed to be flexible and time optimal. Furthermore, unlike other methodologies, the approach reported here does not put constraints on the topology or dimensionality of the target architecture. The ordered index points are represented by nodes in a diagram called a Systolic Precedence Diagram (SPD). The SPD is a form of precedence graph that takes into account the systolic operation requirements of strictly local communications and regular data flow. Therefore, any algorithm with variable dependence vectors has to be transformed into a regular indexed set of computations with local dependencies. This can be done by replacing variable dependence vectors with sets of fixed dependence vectors. The SPD is transformed into an acyclic, labeled, directed graph called the Systolic Directed Graph (SDG). The SDG models the data flow as well as the timing for the execution of the given algorithm on a time-optimal array.

  15. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  16. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  17. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  18. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  19. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  20. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in user-input sanitization of software systems often lead to vulnerabilities. Among them many are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FST. A compact representation of FST is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.

  1. Generalized Higher Degree Total Variation (HDTV) Regularization

    PubMed Central

    Hu, Yue; Ongie, Greg; Ramani, Sathish; Jacob, Mathews

    2015-01-01

    We introduce a family of novel image regularization penalties called generalized higher degree total variation (HDTV). These penalties further extend our previously introduced HDTV penalties, which generalize the popular total variation (TV) penalty to incorporate higher degree image derivatives. We show that many of the proposed second degree extensions of TV are special cases or are closely approximated by a generalized HDTV penalty. Additionally, we propose a novel fast alternating minimization algorithm for solving image recovery problems with HDTV and generalized HDTV regularization. The new algorithm enjoys a ten-fold speed up compared to the iteratively reweighted majorize minimize algorithm proposed in a previous work. Numerical experiments on 3D magnetic resonance images and 3D microscopy images show that HDTV and generalized HDTV improve the image quality significantly compared with TV. PMID:24710832

  2. Charged fermions tunneling from regular black holes

    SciTech Connect

Sharif, M.; Javed, W.

    2012-11-15

    We study Hawking radiation of charged fermions as a tunneling process from charged regular black holes, i.e., the Bardeen and ABGB black holes. For this purpose, we apply the semiclassical WKB approximation to the general covariant Dirac equation for charged particles and evaluate the tunneling probabilities. We recover the Hawking temperature corresponding to these charged regular black holes. Further, we consider the back-reaction effects of the emitted spin particles from black holes and calculate their corresponding quantum corrections to the radiation spectrum. We find that this radiation spectrum is not purely thermal due to the energy and charge conservation but has some corrections. In the absence of charge, e = 0, our results are consistent with those already present in the literature.

  3. A regular version of Smilansky model

    SciTech Connect

    Barseghyan, Diana; Exner, Pavel

    2014-04-15

We discuss a modification of the Smilansky model in which a singular potential “channel” is replaced by a regular potential, unbounded from below, which shrinks as it becomes deeper. We demonstrate that, similarly to the original model, such a system exhibits a spectral transition with respect to the coupling constant, and determine the critical value above which a new spectral branch opens. The result is generalized to situations with multiple potential “channels.”

  4. A regularization approach to hydrofacies delineation

    SciTech Connect

    Wohlberg, Brendt; Tartakovsky, Daniel

    2009-01-01

    We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.

  5. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were filled with a fluorochrome (Cy5.5), and for each inclusion optical data at 60 projections over 360 degrees were acquired. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation, and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
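The Richardson-Lucy (MLEM) iteration the abstract builds on can be sketched on a toy 1D deconvolution problem. This is plain RL only; the paper's entropic regularization and floating default prior are omitted, and the blur kernel and signal below are made up for illustration:

```python
import numpy as np

def gaussian_kernel(n, sigma):
    t = np.arange(n) - n // 2
    k = np.exp(-t**2 / (2 * sigma**2))
    return k / k.sum()

def richardson_lucy(b, A, n_iter=200, eps=1e-12):
    """Plain Richardson-Lucy (MLEM) iteration for b ~ A @ x with x >= 0."""
    x = np.full(A.shape[1], b.mean())      # constant initial condition
    norm = A.T @ np.ones_like(b)           # column sums, for normalization
    for _ in range(n_iter):
        x *= (A.T @ (b / (A @ x + eps))) / norm
    return x

# Toy problem: blur a two-spike signal with a Gaussian, then deconvolve.
n = 64
k = gaussian_kernel(9, 1.5)
A = np.zeros((n, n))
for i in range(n):
    for j, kv in enumerate(k):
        col = i + j - 4
        if 0 <= col < n:
            A[i, col] += kv

x_true = np.zeros(n); x_true[20] = 1.0; x_true[40] = 0.5
b = A @ x_true
x_hat = richardson_lucy(b, A)
```

The multiplicative update keeps the estimate nonnegative by construction, which is one reason MLEM-type schemes suit photon-count data such as fluorescence measurements.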

  6. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets, and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. Discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square, and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
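The core computation here, minimizing an l1-penalized residual over dictionary coefficients, can be illustrated with plain iterative soft-thresholding (ISTA), a simpler relative of SpaRSA. The transfer matrix and sparse "force" below are synthetic stand-ins:

```python
import numpy as np

def ista(H, y, lam, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||H c - y||_2^2 + lam*||c||_1."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = c - (H.T @ (H @ c - y)) / L    # gradient step on the data term
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return c

rng = np.random.default_rng(0)
H = rng.standard_normal((100, 40))         # transfer function x dictionary
c_true = np.zeros(40)
c_true[[5, 17]] = [2.0, -1.5]              # sparse coefficient vector
y = H @ c_true + 0.01 * rng.standard_normal(100)
c_hat = ista(H, y, lam=0.5)
```

The soft-threshold step is what produces exact zeros, so the number of active basis functions emerges from the optimization instead of being fixed in advance, which is the point the abstract makes against the classical l2 expansion.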

  7. Regularization Parameter Selections via Generalized Information Criterion

    PubMed Central

    Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling

    2009-01-01

    We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material. PMID:20676354
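A minimal sketch of GIC-based regularization parameter selection, using ridge regression as a simple stand-in for the nonconcave penalized likelihood fit of the paper; the data, grid, and kappa choice are illustrative (kappa = log n gives the BIC-type selector, kappa = 2 the AIC-type):

```python
import numpy as np

def gic_select(X, y, lams, kappa):
    """Pick lambda minimizing a GIC of the form log(RSS/n) + kappa*df/n."""
    n = len(y)
    best, best_lam = np.inf, None
    for lam in lams:
        G = X.T @ X + lam * np.eye(X.shape[1])
        beta = np.linalg.solve(G, X.T @ y)
        df = np.trace(X @ np.linalg.solve(G, X.T))   # effective deg. of freedom
        rss = np.sum((y - X @ beta) ** 2)
        gic = np.log(rss / n) + kappa * df / n
        if gic < best:
            best, best_lam = gic, lam
    return best_lam

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
beta = np.zeros(10); beta[:3] = [1.0, -1.0, 0.5]
y = X @ beta + 0.5 * rng.standard_normal(200)

lam_bic = gic_select(X, y, lams=[0.01, 0.1, 1.0, 10.0, 100.0], kappa=np.log(200))
```

The trade-off the paper analyzes is visible in the criterion itself: the fit term rewards small lambda while the kappa-weighted complexity term rewards shrinkage, and the strength of kappa decides between AIC-like and BIC-like behavior.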

  8. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  9. Regular language constrained sequence alignment revisited.

    PubMed

    Kucherov, Gregory; Pinhas, Tamar; Ziv-Ukelson, Michal

    2011-05-01

Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, the Regular Expression Constrained Sequence Alignment Problem was introduced, along with an O(n²t⁴) time and O(n²t²) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the input non-deterministic automaton. A faster O(n²t³) time algorithm for the same problem was subsequently proposed. In this article, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst case time complexity bound to O(n²t³/log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner Tree computation. While it does not improve the worst case, our simulations show that both approaches are efficient in practice, especially when the input automata are dense. PMID:21554020

  10. Regular surface layer of Azotobacter vinelandii.

    PubMed Central

    Bingle, W H; Doran, J L; Page, W J

    1984-01-01

    Washing Azotobacter vinelandii UW1 with Burk buffer or heating cells at 42 degrees C exposed a regular surface layer which was effectively visualized by freeze-etch electron microscopy. This layer was composed of tetragonally arranged subunits separated by a center-to-center spacing of approximately 10 nm. Cells washed with distilled water to remove an acidic major outer membrane protein with a molecular weight of 65,000 did not possess the regular surface layer. This protein, designated the S protein, specifically reattached to the surface of distilled-water-washed cells in the presence of the divalent calcium, magnesium, strontium, or beryllium cations. All of these cations except beryllium supported reassembly of the S protein into a regular tetragonal array. Although the surface localization of the S protein has been demonstrated, radioiodination of exposed envelope proteins in whole cells did not confirm this. The labeling behavior of the S protein could be explained on the basis of varying accessibilities of different tyrosine residues to iodination. Images PMID:6735982

  11. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  12. Automatic detection of regularly repeating vocalizations

    NASA Astrophysics Data System (ADS)

    Mellinger, David

    2005-09-01

Many animal species produce repetitive sounds at regular intervals. This regularity can be used for automatic recognition of the sounds, providing improved detection at a given signal-to-noise ratio. Here, the detection of sperm whale sounds is examined. Sperm whales produce highly repetitive ``regular clicks'' at periods of about 0.2-2 s, and faster click trains in certain behavioral contexts. The following detection procedure was tested: a spectrogram was computed; values within a certain frequency band were summed; time windowing was applied; each windowed segment was autocorrelated; and the maximum of the autocorrelation within a certain periodicity range was chosen. This procedure was tested on sets of recordings containing sperm whale sounds and interfering sounds, both low-frequency recordings from autonomous hydrophones and high-frequency ones from towed hydrophone arrays. An optimization procedure iteratively varies detection parameters (spectrogram frame length and frequency range, window length, periodicity range, etc.). Performance of various sets of parameters was measured by setting a standard level of allowable missed calls, and the resulting optimum parameters are described. Performance is also compared to that of a neural network trained using the data sets. The method is also demonstrated for sounds of blue whales, minke whales, and seismic airguns. [Funding from ONR.]
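The detection chain described above (band energy, windowing, autocorrelation maximum within a periodicity range) can be sketched as follows; a rectified waveform stands in for the band-summed spectrogram, and all signal parameters are illustrative rather than taken from the study:

```python
import numpy as np

def click_train_score(x, fs, min_period=0.2, max_period=2.0):
    """Score regularly repeating clicks: maximum of the normalized envelope
    autocorrelation inside the expected inter-click-interval range."""
    env = np.abs(x)                       # crude envelope (stand-in for a
    env = env - env.mean()                # band-summed spectrogram)
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    ac = ac / ac[0]                       # normalize by zero-lag energy
    lo, hi = int(min_period * fs), int(max_period * fs)
    return ac[lo:hi].max()

fs = 1000
t = np.arange(0, 8.0, 1 / fs)
clicks = np.zeros_like(t)
clicks[(np.arange(0, 8.0, 0.5) * fs).astype(int)] = 1.0   # 0.5 s click period

rng = np.random.default_rng(2)
noise = 0.1 * rng.standard_normal(t.size)
score_clicks = click_train_score(clicks + noise, fs)
score_noise = click_train_score(noise, fs)
```

A periodic click train produces a pronounced autocorrelation peak at its inter-click interval, while noise alone yields only small fluctuations, so thresholding this score separates the two.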

  13. Sparsity regularization for parameter identification problems

    NASA Astrophysics Data System (ADS)

    Jin, Bangti; Maass, Peter

    2012-12-01

    The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some

  14. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

Three dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then, the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, and the estimates are uniform in alpha.

  15. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  16. Quaternion regularization and trajectory motion control in celestial mechanics and astrodynamics: II

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    2014-07-01

    Problems of regularization in celestial mechanics and astrodynamics are considered, and basic regular quaternion models for celestial mechanics and astrodynamics are presented. It is shown that the effectiveness of analytical studies and numerical solutions to boundary value problems of controlling the trajectory motion of spacecraft can be improved by using quaternion models of astrodynamics. In this second part of the paper, specific singularity-type features (division by zero) are considered. They result from using classical equations in angular variables (particularly in Euler variables) in celestial mechanics and astrodynamics and can be eliminated by using Euler (Rodrigues-Hamilton) parameters and Hamilton quaternions. Basic regular (in the above sense) quaternion models of celestial mechanics and astrodynamics are considered; these include equations of trajectory motion written in nonholonomic, orbital, and ideal moving trihedrals whose rotational motions are described by Euler parameters and quaternions of turn; and quaternion equations of instantaneous orbit orientation of a celestial body (spacecraft). New quaternion regular equations are derived for the perturbed three-dimensional two-body problem (spacecraft trajectory motion). These equations are constructed using ideal rectangular Hansen coordinates and quaternion variables, and they have additional advantages over those known for regular Kustaanheimo-Stiefel equations.

  17. The regular state in higher order gravity

    NASA Astrophysics Data System (ADS)

    Cotsakis, Spiros; Kadry, Seifedine; Trachilis, Dimitrios

    2016-08-01

We consider the higher-order gravity theory derived from the quadratic Lagrangian R + εR² in vacuum as a first-order (ADM-type) system with constraints, and build time developments of solutions of an initial value formulation of the theory. We show that all such solutions, if analytic, contain the right number of free functions to qualify as general solutions of the theory. We further show that any regular analytic solution which satisfies the constraints and the evolution equations can be given in the form of an asymptotic formal power series expansion.

  18. Regularization ambiguities in loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Perez, Alejandro

    2006-02-01

One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation which is free of ultraviolet divergences. However, ambiguities associated to the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of an UV problem—the existence of well-behaved regularization of the constraints—is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated to the SU(2) unitary representation used in the diffeomorphism covariant “point-splitting” regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions—due to the difficulties associated to the definition of the physical inner product—it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we

  19. Total-variation regularization with bound constraints

    SciTech Connect

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
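A minimal sketch of the splitting idea, handling the TV term and the bound constraint as separate alternating operations. This uses plain projected gradient descent on a smoothed 1D TV functional, not the authors' algorithm, and the signal, noise level, and parameters are illustrative:

```python
import numpy as np

def tv_denoise_bounded(b, lam, lo=0.0, hi=1.0, step=0.1, n_iter=2000, eps=1e-6):
    """Projected gradient on 0.5*||u-b||^2 + lam*TV_eps(u), clipped to [lo, hi].
    The TV descent step and the bound constraint are decoupled: one gradient
    step on the objective, then a projection onto the box."""
    u = b.clip(lo, hi)
    for _ in range(n_iter):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)     # gradient of the smoothed |d| terms
        g = u - b                        # gradient of the data term
        g[:-1] -= lam * w                # accumulate TV gradient contributions
        g[1:] += lam * w
        u = (u - step * g).clip(lo, hi)  # enforce bounds by projection
    return u

rng = np.random.default_rng(3)
x = np.repeat([0.0, 1.0, 0.3], 50)       # piecewise-constant signal in [0, 1]
b = x + 0.2 * rng.standard_normal(x.size)
u = tv_denoise_bounded(b, lam=0.5)
```

Because the projection is a separate step, any TV minimizer could be substituted for the gradient step without touching the constraint handling, which is the flexibility the abstract emphasizes.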

  20. Multichannel image regularization using anisotropic geodesic filtering

    SciTech Connect

    Grazzini, Jacopo A

    2010-01-01

This paper extends a recent image-dependent regularization approach aimed at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. Building on this, we design a generalized anisotropic geodesic filter, incorporating not only a measure of the edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.

  1. Promoting regular physical activity in pulmonary rehabilitation.

    PubMed

    Garcia-Aymerich, Judith; Pitta, Fabio

    2014-06-01

    Patients with chronic respiratory diseases are usually physically inactive, which is an important negative prognostic factor. Therefore, promoting regular physical activity is of key importance in reducing morbidity and mortality and improving the quality of life in this population. A current challenge to pulmonary rehabilitation is the need to develop strategies that induce or facilitate the enhancement of daily levels of physical activity. Because exercise training alone, despite improving exercise capacity, does not consistently generate similar improvements in physical activity in daily life, there is also a need to develop behavioral interventions that help to promote activity. PMID:24874131

  2. Regularized Grad equations for multicomponent plasmas

    NASA Astrophysics Data System (ADS)

    Magin, Thierry E.; Martins, Gérald; Torrilhon, Manuel

    2011-05-01

The moment method of Grad is used to derive macroscopic conservation equations for multicomponent plasmas for small and moderate Knudsen numbers, accounting for the influence of the electromagnetic field and thermal nonequilibrium. In the low Knudsen number limit, the equations derived are fully consistent with those obtained by means of the Chapman-Enskog method. In particular, we have retrieved the Kolesnikov effect coupling electrons and heavy particles in the case of the Boltzmann moment systems. Finally, a regularization procedure is proposed to achieve continuous shock structures at all Mach numbers.

  3. Spectral action with zeta function regularization

    NASA Astrophysics Data System (ADS)

    Kurkov, Maxim A.; Lizzi, Fedele; Sakellariadou, Mairi; Watcharangkool, Apimook

    2015-03-01

    In this paper we propose a novel definition of the bosonic spectral action using zeta function regularization, in order to address the issues of renormalizability and spectral dimensions. We compare the zeta spectral action with the usual (cutoff-based) spectral action and discuss its origin and predictive power, stressing the importance of the issue of the three dimensionful fundamental constants, namely the cosmological constant, the Higgs vacuum expectation value, and the gravitational constant. We emphasize the fundamental role of the neutrino Majorana mass term for the structure of the bosonic action.

  4. Dense Regular Packings of Irregular Nonconvex Particles

    NASA Astrophysics Data System (ADS)

    de Graaf, Joost; van Roij, René; Dijkstra, Marjolein

    2011-10-01

We present a new numerical scheme to study systems of nonconvex, irregular, and punctured particles in an efficient manner. We employ this method to analyze regular packings of odd-shaped bodies, both from a nanoparticle and from a computational geometry perspective. Besides determining close-packed structures for 17 irregular shapes, we confirm several conjectures for the packings of a large set of 142 convex polyhedra and extend upon these. We also prove that we have obtained the densest packing for both rhombicuboctahedra and rhombic enneacontahedra, and we have improved upon the packing of enneagons and truncated tetrahedra.

  5. Accretion onto some well-known regular black holes

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

In this work, we discuss accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes obtained using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, energy density, and rate of change of the mass for each of the regular black holes.

  6. Accelerating Large Data Analysis By Exploiting Regularities

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.

  7. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
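For context, the underlying statistical model being protected is ordinary l2-regularized logistic regression, sketched here by plain gradient descent without any of the paper's cryptographic machinery; the data and hyperparameters are illustrative:

```python
import numpy as np

def fit_logreg_l2(X, y, lam=1.0, lr=0.5, n_iter=1000):
    """Gradient descent for l2-regularized logistic regression.
    (The paper's contribution is the privacy protection wrapped around such a
    fit; this sketch shows only the statistical model itself.)"""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w / n     # loss + ridge penalty
        w -= lr * grad
    return w

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 3))
w_true = np.array([2.0, -1.0, 0.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
w_hat = fit_logreg_l2(X, y, lam=1.0)
```

In the multi-institution setting the paper studies, it is precisely the per-site gradient contributions in this loop that must be aggregated without exposing individual-level data.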

  8. Nonlinear regularization techniques for seismic tomography

    SciTech Connect

    Loris, I. Douma, H.; Nolet, G.; Regone, C.

    2010-02-01

    The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear, l₂ penalties are compared to so-called sparsity promoting l₁ and l₀ penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an l₂ norm works almost as well as minimizing the total variation but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer l₁ damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple l₂ minimization ('Tikhonov regularization') which should be avoided. In some of our examples, the l₀ method produced notable artifacts. In addition we show how nonlinear l₁ methods for finding sparse models can be competitive in speed with the widely used l₂ methods, certainly under noisy conditions, so that there is no need to shun l₁ penalizations.
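    The qualitative difference between the l₂ and l₁ penalties compared above shows up already in the scalar case, where each penalty has a closed-form minimizer. A sketch with illustrative numbers (not the tomography code itself):

```python
# Closed-form scalar minimizers: an l2 (Tikhonov) penalty rescales a
# coefficient, while an l1 penalty soft-thresholds it, sending small
# coefficients to exactly zero (sparsity). Numbers are illustrative.

def ridge_shrink(c, lam):
    # argmin_x 0.5*(x - c)**2 + 0.5*lam*x**2  ->  c / (1 + lam)
    return c / (1.0 + lam)

def soft_threshold(c, lam):
    # argmin_x 0.5*(x - c)**2 + lam*abs(x)  ->  sign(c)*max(abs(c)-lam, 0)
    if c > lam:
        return c - lam
    if c < -lam:
        return c + lam
    return 0.0

coeffs = [3.0, -0.4, 0.1, -2.5]                # stand-in wavelet coefficients
l2 = [ridge_shrink(c, 0.5) for c in coeffs]    # every entry merely shrunk
l1 = [soft_threshold(c, 0.5) for c in coeffs]  # small entries become exactly 0
```

    This is why l₁ damping of wavelet coefficients yields the "noiseless" reconstructions mentioned in the abstract: coefficients dominated by noise are thresholded away rather than only attenuated.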

  9. Tomographic laser absorption spectroscopy using Tikhonov regularization.

    PubMed

    Guha, Avishek; Schoegl, Ingmar

    2014-12-01

    The application of tunable diode laser absorption spectroscopy (TDLAS) to flames with nonhomogeneous temperature and concentration fields is an area where only few studies exist. Experimental work explores the performance of tomographic reconstructions of species concentration and temperature profiles from wavelength-modulated TDLAS measurements within the plume of an axisymmetric McKenna burner. Water vapor transitions at 1391.67 and 1442.67 nm are probed using calibration-free wavelength modulation spectroscopy with second harmonic detection (WMS-2f). A single collimated laser beam is swept parallel to the burner surface, where scans yield pairs of line-of-sight (LOS) data at multiple radial locations. Radial profiles of absorption data are reconstructed using Tikhonov regularized Abel inversion, which suppresses the amplification of experimental noise that is typically observed for reconstructions with high spatial resolution. Based on spectral data reconstructions, temperatures and mole fractions are calculated point-by-point. Here, a least-squares approach addresses difficulties due to modulation depths that cannot be universally optimized due to a nonuniform domain. Experimental results show successful reconstructions of temperature and mole fraction profiles based on two-transition, nonoptimally modulated WMS-2f and Tikhonov regularized Abel inversion, and thus validate the technique as a viable diagnostic tool for flame measurements. PMID:25607968
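    The role Tikhonov regularization plays above, suppressing noise amplification in an ill-conditioned inversion, can be illustrated on a tiny 2x2 system. This is a toy stand-in for the regularized Abel inversion, with made-up numbers:

```python
# Tikhonov regularization on a nearly singular 2x2 system:
#   minimize ||A x - b||^2 + alpha^2 ||x||^2
# solved via the normal equations (A^T A + alpha^2 I) x = A^T b.
# Toy illustration; not the Abel-inversion code from the paper.

def tikhonov_2x2(A, b, alpha):
    n00 = A[0][0] ** 2 + A[1][0] ** 2 + alpha ** 2
    n01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    n11 = A[0][1] ** 2 + A[1][1] ** 2 + alpha ** 2
    r0 = A[0][0] * b[0] + A[1][0] * b[1]
    r1 = A[0][1] * b[0] + A[1][1] * b[1]
    det = n00 * n11 - n01 * n01
    return [(n11 * r0 - n01 * r1) / det, (n00 * r1 - n01 * r0) / det]

A = [[1.0, 1.0], [1.0, 1.0001]]            # nearly singular operator
b_noisy = [2.0, 2.001]                     # underlying solution is near [1, 1]
x_plain = tikhonov_2x2(A, b_noisy, 0.0)    # tiny data noise wildly amplified
x_reg = tikhonov_2x2(A, b_noisy, 0.01)     # damped solution, close to [1, 1]
```

    Without regularization the perturbed data give roughly [-8, 10]; with a small `alpha` the reconstruction stays near [1, 1], which is the same trade-off exploited when reconstructing high-resolution radial absorption profiles.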

  10. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  11. Regularized Semiparametric Estimation for Ordinary Differential Equations

    PubMed Central

    Li, Yun; Zhu, Ji; Wang, Naisyin

    2015-01-01

    Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to well estimate these parameters. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constants within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639

  12. Explicit solutions of one-dimensional total variation problem

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2015-09-01

    This work deals with denoising of a one-dimensional signal corrupted by additive white Gaussian noise. A common way to solve the problem is to utilize the total variation (TV) method. Basically, the TV regularization minimizes a functional consisting of the sum of fidelity and regularization terms. We derive explicit solutions of the one-dimensional TV regularization problem that help us to restore noisy signals with a direct, non-iterative algorithm. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of noisy signals.
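    The functional described above can also be minimized iteratively, by gradient descent on a smoothed TV term. The sketch below is a simple iterative stand-in for the paper's explicit non-iterative solutions, with made-up data:

```python
# Gradient descent on a smoothed 1D TV functional:
#   E(x) = 0.5 * sum (x_i - y_i)^2 + lam * sum sqrt((x_{i+1} - x_i)^2 + eps)
# Illustrative iterative solver; the paper derives direct solutions instead.
import math

def tv_energy(x, y, lam, eps=1e-2):
    fid = 0.5 * sum((a - b) ** 2 for a, b in zip(x, y))
    tv = lam * sum(math.sqrt((x[i + 1] - x[i]) ** 2 + eps)
                   for i in range(len(x) - 1))
    return fid + tv

def tv_denoise(y, lam=0.5, step=0.02, iters=400, eps=1e-2):
    x = list(y)
    n = len(y)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]        # fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            t = lam * d / math.sqrt(d * d + eps)   # smoothed-TV gradient
            g[i] -= t
            g[i + 1] += t
        x = [x[i] - step * g[i] for i in range(n)]
    return x

y = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]   # noisy step signal
x = tv_denoise(y)
```

    The small step size keeps the descent stable for this smoothing parameter, so each iteration decreases the energy E.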

  13. The Essential Special Education Guide for the Regular Education Teacher

    ERIC Educational Resources Information Center

    Burns, Edward

    2007-01-01

    The Individuals with Disabilities Education Act (IDEA) of 2004 has placed a renewed emphasis on the importance of the regular classroom, the regular classroom teacher and the general curriculum as the primary focus of special education. This book contains over 100 topics that deal with real issues and concerns regarding the regular classroom and…

  14. Delayed Acquisition of Non-Adjacent Vocalic Distributional Regularities

    ERIC Educational Resources Information Center

    Gonzalez-Gomez, Nayeli; Nazzi, Thierry

    2016-01-01

    The ability to compute non-adjacent regularities is key in the acquisition of a new language. In the domain of phonology/phonotactics, sensitivity to non-adjacent regularities between consonants has been found to appear between 7 and 10 months. The present study focuses on the emergence of a posterior-anterior (PA) bias, a regularity involving two…

  15. The Regular Education Initiative: Patent Medicine for Behavioral Disorders.

    ERIC Educational Resources Information Center

    Braaten, Sheldon; And Others

    1988-01-01

    Implications of the regular education initiative for behaviorally disordered students are examined in the context of integration and right to treatment. These students are underserved, often cannot be appropriately served in regular classrooms, are not welcomed by most regular classroom teachers, and have treatment rights the initiative does not…

  16. On Regularity Criteria for the 2D Generalized MHD System

    NASA Astrophysics Data System (ADS)

    Jiang, Zaihong; Wang, Yanan; Zhou, Yong

    2016-06-01

    This paper deals with the problem of regularity criteria for the 2D generalized MHD system with fractional dissipative terms −Λ^{2α}u for the velocity field and −Λ^{2β}b for the magnetic field, respectively. Various regularity criteria are established to guarantee smoothness of solutions. It turns out that our regularity criteria imply previous global existence results naturally.

  17. 29 CFR 778.408 - The specified regular rate.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... regular rate. (a) To qualify under section 7(f), the contract must specify “a regular rate of pay of not... section 7(f), must specify a “regular rate,” indicates that this criterion of these two cases is...

  18. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    ERIC Educational Resources Information Center

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  19. 39 CFR 6.1 - Regular meetings, annual meeting.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a...

  20. Selected Characteristics, Classified & Unclassified (Regular) Students; Community Colleges, Fall 1978.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu. Community Coll. System.

    Fall 1978 enrollment data for Hawaii's community colleges and data on selected characteristics of students enrolled in regular credit programs are presented. Of the 27,880 registrants, 74% were regular students, 1% were early admittees, 6% were registered in non-credit apprenticeship programs, and 18% were in special programs. Regular student…

  1. Local orientational mobility in regular hyperbranched polymers

    NASA Astrophysics Data System (ADS)

    Dolgushev, Maxim; Markelov, Denis A.; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.

  2. Generalized equations of state and regular universes

    NASA Astrophysics Data System (ADS)

    Contreras, F.; Cruz, N.; González, E.

    2016-05-01

    We found nonsingular solutions for universes filled with a fluid which obeys a Generalized Equation of State of the form P(ρ) = −Aρ + γρ^λ. An emergent universe is obtained if A = 1 and λ = 1/2. If the matter source is reinterpreted as that of a scalar matter field with some potential, the corresponding potential is derived. For a closed universe, an exact bounce solution is found for A = 1/3 and the same λ. We also explore how the composition of these universes can be interpreted in terms of known fluids. It is of interest to note that accelerated solutions previously found for the late time evolution also represent regular solutions at early times.

  3. Dyslexia in regular orthographies: manifestation and causation.

    PubMed

    Wimmer, Heinz; Schurz, Matthias

    2010-11-01

    This article summarizes our research on the manifestation of dyslexia in German and on cognitive deficits, which may account for the severe reading speed deficit and the poor orthographic spelling performance that characterize dyslexia in regular orthographies. An only limited causal role of phonological deficits (phonological awareness, phonological STM, and rapid naming) for the emergence of reading fluency and spelling deficits is inferred from two large longitudinal studies with assessments of phonology before learning to read. A review of our cross-sectional studies provides no support for several cognitive deficits (visual-attention deficit, magnocellular dysfunction, skill automatization deficit, and visual-sequential memory deficit), which were proposed as alternatives to the phonological deficit account. Finally, a revised version of the phonological deficit account in terms of a dysfunction in orthographic-phonological connectivity is proposed. PMID:20957684

  4. Local orientational mobility in regular hyperbranched polymers.

    PubMed

    Dolgushev, Maxim; Markelov, Denis A; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes. PMID:27575171

  5. Black hole mimickers: Regular versus singular behavior

    SciTech Connect

    Lemos, Jose P. S.; Zaslavskii, Oleg B.

    2008-07-15

    Black hole mimickers are possible alternatives to black holes; they would look observationally almost like black holes but would have no horizon. The properties in the near-horizon region where gravity is strong can be quite different for both types of objects, but at infinity it could be difficult to discern black holes from their mimickers. To disentangle this possible confusion, we examine the near-horizon properties, and their connection with far away asymptotic properties, of some candidates for black hole mimickers. We study spherically symmetric uncharged or charged but nonextremal objects, as well as spherically symmetric charged extremal objects. Within the uncharged or charged but nonextremal black hole mimickers, we study nonextremal ε-wormholes on the threshold of the formation of an event horizon, of which a subclass are called black foils, and gravastars. Within the charged extremal black hole mimickers we study extremal ε-wormholes on the threshold of the formation of an event horizon, quasi-black holes, and wormholes on the basis of quasi-black holes from Bonnor stars. We elucidate whether or not the objects belonging to these two classes remain regular in the near-horizon limit. The requirement of full regularity, i.e., finite curvature and absence of naked behavior, up to an arbitrary neighborhood of the gravitational radius of the object enables one to rule out potential mimickers in most of the cases. A list ranking the black hole mimickers from best to worst, both nonextremal and extremal, is as follows: wormholes on the basis of extremal black holes or on the basis of quasi-black holes, quasi-black holes, wormholes on the basis of nonextremal black holes (black foils), and gravastars. 
Since in observational astrophysics it is difficult to find extremal configurations (the best mimickers in the ranking), whereas nonextremal configurations are really bad mimickers, the task of distinguishing black holes from their mimickers seems to

  6. Regularization of Instantaneous Frequency Attribute Computations

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computation of a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal; (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes.
    References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
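    The second method's power centroid can be sketched in a few lines. The naive DFT below and the small stabilizing floor added to the denominator (a toy version of the regularized division discussed above) are illustrative choices, not the authors' Gabor/Stockwell implementation:

```python
# Power-spectrum centroid ("dominant frequency") of one analysis frame,
# computed with a naive DFT. Toy stand-in for a time-frequency centroid.
import math

def power_spectrum(frame):
    n = len(frame)
    spec = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append(re * re + im * im)
    return spec

def centroid_bin(spec, floor=1e-12):
    # Stabilize the division by adding a small floor to the total power,
    # so frames with negligible energy do not blow up the estimate.
    num = sum(k * p for k, p in enumerate(spec))
    den = sum(spec) + floor
    return num / den

n = 64
frame = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]  # 8 cycles
fc = centroid_bin(power_spectrum(frame))                       # near bin 8
```

    Sliding such a frame along the trace and computing one centroid per position gives a temporally variant dominant frequency; the regularization choice (here a simple floor) controls its behavior in low-energy intervals.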

  7. A regularized multivariate regression approach for eQTL analysis

    PubMed Central

    Zhang, Hexin; Zhang, Yuzheng; Hsu, Li; Wang, Pei

    2013-01-01

    Expression quantitative trait loci (eQTLs) are genomic loci that regulate expression levels of mRNAs or proteins. Understanding these regulatory relationships provides important clues to biological pathways that underlie diseases. In this paper, we propose a new statistical method, GroupRemMap, for identifying eQTLs. We model the relationship between gene expression and single nucleotide variants (SNVs) through multivariate linear regression models, in which gene expression levels are responses and SNV genotypes are predictors. To handle the high-dimensionality as well as to incorporate the intrinsic group structure of SNVs, we introduce a new regularization scheme to (1) control the overall sparsity of the model; (2) encourage the group selection of SNVs from the same gene; and (3) facilitate the detection of trans-hub-eQTLs. We apply the proposed method to the colorectal and breast cancer data sets from The Cancer Genome Atlas (TCGA), and identify several biologically interesting eQTLs. These findings may provide insight into biological processes associated with cancers and generate hypotheses for future studies. PMID:26085849
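    Group selection of the kind described in (2) is typically induced by a group-lasso-style penalty, whose proximal step is block soft-thresholding: a whole group of coefficients is either shrunk together or zeroed together. A hedged sketch (the gene names and numbers are made up; this is not the GroupRemMap code):

```python
# Block (group) soft-thresholding: the proximal operator of lam*||v||_2.
# A group whose joint norm is below lam is dropped entirely; otherwise
# all its coefficients are shrunk by a common factor. Illustrative only.
import math

def group_soft_threshold(v, lam):
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)          # the entire group is eliminated
    scale = 1.0 - lam / norm
    return [scale * x for x in v]

gene_a = [3.0, 4.0]    # joint norm 5.0 -> kept, shrunk by factor 0.8
gene_b = [0.3, 0.4]    # joint norm 0.5 -> zeroed out as a group
a = group_soft_threshold(gene_a, 1.0)
b = group_soft_threshold(gene_b, 1.0)
```

    Applied to SNV coefficients grouped by gene, this is what lets the penalty keep or discard all SNVs of a gene together.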

  8. Revealing hidden regularities with a general approach to fission

    NASA Astrophysics Data System (ADS)

    Schmidt, Karl-Heinz; Jurado, Beatriz

    2015-12-01

    Selected aspects of a general approach to nuclear fission are described with the focus on the possible benefit of meeting the increasing need for nuclear data for existing and emerging nuclear applications. The most prominent features of this approach are the evolution of quantum-mechanical wave functions in systems with complex shape, memory effects in the dynamics of stochastic processes, the influence of the Second Law of thermodynamics on the evolution of open systems in terms of statistical mechanics, and the topological properties of a continuous function in multi-dimensional space. It is demonstrated that this approach allows reproducing the measured fission barriers and the observed properties of the fission fragments and prompt neutrons. Our approach is based on sound physical concepts, as demonstrated by the fact that practically all the parameters have a physical meaning, and reveals a high degree of regularity in the fission observables. Therefore, we expect a good predictive power within the region extending from Po isotopes to Sg isotopes where the model parameters have been adjusted. Our approach can be extended to other regions provided that there is enough empirical information available that allows determining appropriate values of the model parameters. Possibilities for combining this general approach with microscopic models are suggested. These are supposed to enhance the predictive power of the general approach and to help improving or adjusting the microscopic models. This could be a way to overcome the present difficulties for producing evaluations with the required accuracy.

  9. 3+1 dimensional viscous hydrodynamics at high baryon densities

    NASA Astrophysics Data System (ADS)

    Karpenko, Iu; Bleicher, M.; Huovinen, P.; Petersen, H.

    2015-05-01

    A 3+1 dimensional event-by-event viscous hydrodynamic + cascade model is applied to the simulation of heavy ion collision reactions at √s_NN = 6.3–200 GeV. The UrQMD cascade is used for the pre-thermal (pre-hydro) and final (post-hydro) stages of the reaction. The baryon as well as electric charge densities are consistently taken into account in the model. For this aim an equation of state based on a Chiral model coupled to the Polyakov loop is used in the hydrodynamic phase of the evolution. As a result of adjusting the model to the experimental data, effective values of the shear viscosity over entropy density η/s are obtained for different collision energies in the BES region. A decrease of the effective values of η/s from 0.2 to 0.08 is observed as the collision energy increases from √s_NN ≈ 7 to 39 GeV.

  10. On the capacity of multihop slotted ALOHA networks with regular structure

    NASA Astrophysics Data System (ADS)

    Silvester, J. A.; Kleinrock, L.

    1983-08-01

    The capacity of networks with a regular structure operating under the slotted ALOHA access protocol is investigated. Circular (loop) and linear (bus) networks are first examined, followed by consideration of two-dimensional networks. For one-dimensional networks, it is found that the capacity is basically independent of the network average degree and is almost constant with respect to network size. For two-dimensional networks, it is determined that the capacity grows in proportion to the square root of the number of nodes in the network, provided that the average degree is kept small. In addition, it is found that reducing the average degree (with certain connectivity restrictions) permits a higher throughput to be achieved. Some of the peculiarities of routing in these networks are also studied.

  11. Regular treatment with formoterol versus regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Lasserson, Toby J

    2014-01-01

    Background An increase in serious adverse events with both regular formoterol and regular salmeterol in chronic asthma has been demonstrated in previous Cochrane reviews. Objectives We set out to compare the risks of mortality and non-fatal serious adverse events in trials which have randomised patients with chronic asthma to regular formoterol versus regular salmeterol. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked manufacturers’ websites of clinical trial registers for unpublished trial data and also checked Food and Drug Administration (FDA) submissions in relation to formoterol and salmeterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel-design clinical trials on patients of any age and with any severity of asthma if they randomised patients to treatment with regular formoterol versus regular salmeterol (without randomised inhaled corticosteroids), and were of at least 12 weeks’ duration. Data collection and analysis Two authors independently selected trials for inclusion in the review and extracted outcome data. We sought unpublished data on mortality and serious adverse events from the sponsors and authors. Main results The review included four studies (involving 1116 adults and 156 children). All studies were open label and recruited patients who were already taking inhaled corticosteroids for their asthma, and all studies contributed data on serious adverse events. All studies compared formoterol 12 μg versus salmeterol 50 μg twice daily. The adult studies were all comparing Foradil Aerolizer with Serevent Diskus, and the children’s study compared Oxis Turbohaler to Serevent Accuhaler. There was only one death in an adult (which was unrelated to asthma) and none in children, and there were no significant differences in non-fatal serious adverse events comparing formoterol to salmeterol in adults (Peto odds ratio (OR) 0.77; 95

  12. Generalized elastica on 2-dimensional de Sitter space S_1^2

    NASA Astrophysics Data System (ADS)

    Huang, Rongpei; Yu, Junyan

    2016-02-01

    In this paper, the extremals of curvature energy actions on non-null regular curves in 2-dimensional de Sitter space are studied. We completely solve the Euler-Lagrange equation by quadratures. By using the Killing field, we construct three special coordinate systems and express the generalized elastica in 2-dimensional de Sitter space S_1^2 explicitly by integrals.

  13. Preparation of Regular Specimens for Atom Probes

    NASA Technical Reports Server (NTRS)

    Kuhlman, Kim; Wishard, James

    2003-01-01

    A method of preparation of specimens of non-electropolishable materials for analysis by atom probes is being developed as a superior alternative to a prior method. In comparison with the prior method, the present method involves less processing time. Also, whereas the prior method yields irregularly shaped and sized specimens, the present developmental method offers the potential to prepare specimens of regular shape and size. The prior method is called the method of sharp shards because it involves crushing the material of interest and selecting microscopic sharp shards of the material for use as specimens. Each selected shard is oriented with its sharp tip facing away from the tip of a stainless-steel pin and is glued to the tip of the pin by use of silver epoxy. Then the shard is milled by use of a focused ion beam (FIB) to make the shard very thin (relative to its length) and to make its tip sharp enough for atom-probe analysis. The method of sharp shards is extremely time-consuming because the selection of shards must be performed with the help of a microscope, the shards must be positioned on the pins by use of micromanipulators, and the irregularity of size and shape necessitates many hours of FIB milling to sharpen each shard. In the present method, a flat slab of the material of interest (e.g., a polished sample of rock or a coated semiconductor wafer) is mounted in the sample holder of a dicing saw of the type conventionally used to cut individual integrated circuits out of the wafers on which they are fabricated in batches. A saw blade appropriate to the material of interest is selected. The depth of cut and the distance between successive parallel cuts are made such that what is left after the cuts is a series of thin, parallel ridges on a solid base. Then the workpiece is rotated 90° and the pattern of cuts is repeated, leaving behind a square array of square posts on the solid base. 
The posts can be made regular, long, and thin, as required for samples

  14. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification. PMID:22997267
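    The manifold-regularization term that keeps classifier functions smooth along the data manifold is, in graph form, the Laplacian quadratic f^T L f, which equals a weighted sum of squared differences across edges. A minimal sketch on a hand-built graph (not the MRMTL algorithm itself):

```python
# Graph-Laplacian smoothness penalty: f^T L f = sum over edges w*(f_i - f_j)^2.
# Tiny hand-built graph for illustration; not the MRMTL implementation.

def laplacian_quadratic(edges, f):
    """edges: list of (i, j, w). Returns sum_ij w * (f_i - f_j)**2."""
    return sum(w * (f[i] - f[j]) ** 2 for i, j, w in edges)

# Two tight clusters {0,1,2} and {3,4,5} joined by one weak edge.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
         (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0),
         (2, 3, 0.1)]
smooth = [0, 0, 0, 1, 1, 1]   # constant within each cluster: low penalty
rough = [0, 1, 0, 1, 0, 1]    # oscillates inside clusters: high penalty
```

    Functions that respect the cluster structure of the data graph pay almost nothing, while functions that oscillate within a cluster are penalized heavily, which is how the regularizer steers learning toward the data manifold.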

  15. Color correction optimization with hue regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
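    The basic color correction step described above, fitting a 3x3 matrix by linear regression in the minimized-Euclidean-error sense before any hue-regularization term is added, can be sketched as follows. The color values and the "true" matrix are made up for illustration:

```python
# Least-squares fit of a 3x3 color correction matrix via normal equations:
# (S^T S) M^T = S^T D, solved column by column. Illustrative sketch only;
# the hue-regularized optimization in the paper adds further terms.

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_ccm(src, dst):
    """Least-squares M with dst ~= M @ src, one row of M per output channel."""
    sts = [[sum(s[i] * s[j] for s in src) for j in range(3)] for i in range(3)]
    M = []
    for ch in range(3):
        std = [sum(s[i] * d[ch] for s, d in zip(src, dst)) for i in range(3)]
        M.append(solve3(sts, std))
    return M

m_true = [[1.2, -0.1, 0.0], [0.05, 1.1, -0.15], [0.0, -0.2, 1.3]]
src = [[0.9, 0.1, 0.1], [0.2, 0.8, 0.1], [0.1, 0.2, 0.7], [0.5, 0.5, 0.5]]
dst = [[sum(m_true[r][c] * s[c] for c in range(3)) for r in range(3)] for s in src]
M = fit_ccm(src, dst)   # recovers m_true on this noise-free data
```

    On noise-free, consistent data the fit recovers the generating matrix exactly; the hue-regularization proposed in the paper would bias this fit so that memory colors such as skin, grass and sky keep their preferred hues.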

  16. Determinants of Scanpath Regularity in Reading.

    PubMed

    von der Malsburg, Titus; Kliegl, Reinhold; Vasishth, Shravan

    2015-09-01

    Scanpaths have played an important role in classic research on reading behavior. Nevertheless, they have largely been neglected in later research, perhaps due to a lack of suitable analytical tools. Recently, von der Malsburg and Vasishth (2011) proposed a new measure for quantifying differences between scanpaths and demonstrated that this measure can recover effects that were missed with the traditional eyetracking measures. However, the sentences used in that study were difficult to process, and the scanpath effects were accordingly strong. The purpose of the present study was to test the validity, sensitivity, and scope of applicability of the scanpath measure, using simple sentences that are typically read from left to right. We derived predictions for the regularity of scanpaths from the literature on oculomotor control, sentence processing, and cognitive aging and tested these predictions using the scanpath measure and a large database of eye movements. All predictions were confirmed: Sentences with short words and syntactically more difficult sentences elicited more irregular scanpaths. Also, older readers produced more irregular scanpaths than younger readers. In addition, we found an effect that was not reported earlier: Syntax had a smaller influence on the eye movements of older readers than on those of young readers. We discuss this interaction of syntactic parsing cost with age in terms of shifts in processing strategies and a decline of executive control as readers age. Overall, our results demonstrate the validity and sensitivity of the scanpath measure and thus establish it as a productive and versatile tool for reading research. PMID:25530253

  17. Identifying Cognitive States Using Regularity Partitions

    PubMed Central

    2015-01-01

    Functional magnetic resonance imaging (fMRI) data can be used to depict functional connectivity of the brain. Standard techniques have been developed to construct brain networks from these data; typically, nodes are voxels or sets of voxels, with weighted edges between them representing measures of correlation. Identifying cognitive states based on fMRI data involves recording voxel activity over a certain time interval. Using this information, network and machine learning techniques can be applied to discriminate the cognitive states of the subjects by exploring different features of the data. In this work we aim to describe and understand the organization of brain connectivity networks under cognitive tasks. In particular, we use a regularity partitioning algorithm that finds clusters of vertices whose pairwise interconnections behave almost like random bipartite graphs. Based on this random approximation of the graph, we calculate a lower bound on the number of triangles as well as the expected distribution of the edges for each subject and state. We compare the results with state-of-the-art algorithms for exploring connectivity, and we argue that during epochs in which the subject is exposed to a stimulus, the inspected part of the brain is organized in an efficient way that enables enhanced functionality. PMID:26317983
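
    For comparison with the lower bound mentioned above, the exact triangle count of a small network can be read directly off its adjacency matrix, since each triangle contributes six closed walks of length 3. This is an illustrative utility, not the paper's regularity-partition bound:

```python
import numpy as np

def triangle_count(A):
    """Exact triangle count of a simple undirected graph: trace(A^3) / 6."""
    A = np.asarray(A, dtype=float)
    return int(round(np.trace(A @ A @ A) / 6.0))
```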

  18. Compression and regularization with the information bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, Dj; Schwab, David

    Compression fundamentally involves a decision about what is relevant and what is not. The information bottleneck (IB) by Tishby, Pereira, and Bialek formalized this notion as an information-theoretic optimization problem and proposed an optimal tradeoff between throwing away as many bits as possible, and selectively keeping those that are most important. The IB has also recently been proposed as a theory of sensory gating and predictive computation in the retina by Palmer et al. Here, we introduce an alternative formulation of the IB, the deterministic information bottleneck (DIB), that we argue better captures the notion of compression, including that done by the brain. As suggested by its name, the solution to the DIB problem is a deterministic encoder, as opposed to the stochastic encoder that is optimal under the IB. We then compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB vastly outperforms the IB in terms of the DIB cost function. Our derivation of the DIB also provides a family of models which interpolates between the DIB and IB by adding noise of a particular form. We discuss the role of this noise as a regularizer.
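
    The IB objective referred to above is L = I(X;T) - β I(T;Y), minimized over encoders p(t|x). A minimal sketch for discrete distributions (function names are illustrative; the DIB variant restricts to deterministic encoders and modifies the first term):

```python
import numpy as np

def mutual_info(pxy):
    """Mutual information (in nats) of a discrete joint distribution p(x, y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def ib_cost(p_t_given_x, pxy, beta):
    """IB objective I(X;T) - beta * I(T;Y) for an encoder p(t|x)."""
    px = pxy.sum(axis=1)
    pxt = px[:, None] * p_t_given_x   # joint p(x, t)
    pty = p_t_given_x.T @ pxy         # joint p(t, y), since T -- X -- Y is a Markov chain
    return mutual_info(pxt) - beta * mutual_info(pty)
```

    Sweeping beta traces out the tradeoff: larger beta keeps more bits relevant to Y, smaller beta compresses harder.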

  19. Regularities in movement of subsurface condensated fluids

    SciTech Connect

    Ayre, A.G.

    1990-05-01

    Darcy's law is traditionally considered to be a major filtration law. However, molecular and kinetic analyses of fluid movement in a porous medium, with regard for the physical interaction between liquids and rocks, enabled the authors to derive a new, more general law: V = Ko(1 - Jo/J)²J, where V = filtration rate, J = head gradient, Jo = initial filtration gradient, and Ko = V/J for J ≫ Jo, i.e., Darcy's permeability coefficient. For J ≫ Jo, this law reduces to Darcy's law. At J = Jo, filtration stops as multi-molecular liquid flow, and for J < Jo it passes into an individual molecular movement called filling. The filling rate is determined by the law V = λJ, where λ is the filling coefficient. The concept of the initial filtration gradient thereby gets a new interpretation: it is the gradient at which pore-liquid movement changes from filtration to filling. These regularities are important in evaluating subsurface fluid movement in undisturbed environments or at some distance from pumping wells. In particular, it is found that pore-liquid flow in a natural environment is of the filling type, and during this process separation of solution ingredients occurs. The final size of the depression cone of a functioning well or mine is controlled by the interaction between water and rock.
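
    The generalized law above reduces to Darcy's law for J ≫ Jo and vanishes at J = Jo; a direct transcription (variable names are illustrative):

```python
def filtration_rate(J, J0, K0):
    """Generalized filtration law V = K0 * (1 - J0/J)**2 * J.

    Reduces to Darcy's law V ≈ K0*J when J >> J0 and gives V = 0 at J = J0;
    below J0 the text describes a separate molecular 'filling' regime V = lambda*J.
    """
    if J <= J0:
        return 0.0
    return K0 * (1.0 - J0 / J) ** 2 * J
```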

  20. Sparsity-Regularized HMAX for Visual Recognition

    PubMed Central

    Hu, Xiaolin; Zhang, Jianwei; Li, Jianmin; Zhang, Bo

    2014-01-01

    About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision. PMID:24392078
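
    The max-pooling stage credited above with introducing higher-order regularities can be sketched as non-overlapping block maxima over a 2-D feature map (an illustrative reconstruction; HMAX itself pools over scales and positions):

```python
import numpy as np

def max_pool(fmap, size):
    """Non-overlapping max pooling of a 2-D feature map over size x size blocks."""
    H, W = fmap.shape
    Hp, Wp = H // size, W // size
    # crop to a multiple of the block size, then reduce each block to its maximum
    return fmap[:Hp * size, :Wp * size].reshape(Hp, size, Wp, size).max(axis=(1, 3))
```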

  1. Temporal Regularity of the Environment Drives Time Perception

    PubMed Central

    2016-01-01

    It is reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stimulus was regular or not. We found that participants exposed to an irregular environment frequently reported perfectly regularly paced stimuli to be irregular. In a second experiment, we asked participants to judge whether the final stimulus was presented before or after a flash. In this way, we were able to measure distortions in temporal perception as changes in the timing necessary for the sound and the flash to be perceived as synchronous. We found that within a regular context, the perceived timing of deviant last stimuli changed so that the relative anisochrony appeared perceptually decreased. In the irregular context, the perceived timing of irregular stimuli following a regular sequence was not affected. These observations suggest that humans use temporal expectations to evaluate the regularity of sequences and that expectations are combined with sensory stimuli to adapt perceived timing to the statistics of the environment. Expectations can be seen as a priori probabilities on which the perceived timing of stimuli depends. PMID:27441686

  2. Pauli-Villars regularization of field theories on the light front

    SciTech Connect

    Hiller, John R.

    2010-12-22

    Four-dimensional quantum field theories generally require regularization to be well defined. This can be done in various ways, but here we focus on Pauli-Villars (PV) regularization and apply it to nonperturbative calculations of bound states. The philosophy is to introduce enough PV fields to the Lagrangian to regulate the theory perturbatively, including preservation of symmetries, and assume that this is sufficient for the nonperturbative case. The numerical methods usually necessary for nonperturbative bound-state problems are then applied to a finite theory that has the original symmetries. The bound-state problem is formulated as a mass eigenvalue problem in terms of the light-front Hamiltonian. Applications to quantum electrodynamics are discussed.

  3. m-mode regularization scheme for the self-force in Kerr spacetime

    SciTech Connect

    Barack, Leor; Golbourn, Darren A.; Sago, Norichika

    2007-12-15

    We present a new, simple method for calculating the scalar, electromagnetic, and gravitational self-forces acting on particles in orbit around a Kerr black hole. The standard "mode-sum regularization" approach for self-force calculations relies on a decomposition of the full (retarded) perturbation field into multipole modes, followed by the application of a certain mode-by-mode regularization procedure. In recent years several groups have developed numerical codes for calculating black hole perturbations directly in 2+1 dimensions (i.e., decomposing the azimuthal dependence into m-modes but refraining from a full multipole decomposition). Here we formulate a practical scheme for constructing the self-force directly from the 2+1-dimensional m-modes. While the standard mode-sum method has served well in calculations of the self-force in Schwarzschild geometry, the new scheme should allow a more efficient treatment of the Kerr problem.

  4. TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY

    SciTech Connect

    Crotts, Arlin P. S.

    2009-05-20

    Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if the determining factors involve humans rather than phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ≈50% of reports originate from near Aristarchus, ≈16% from Plato, ≈6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ≈80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.

  5. About the Regularized Navier Stokes Equations

    NASA Astrophysics Data System (ADS)

    Cannone, Marco; Karch, Grzegorz

    2005-03-01

    The first goal of this paper is to study the large time behavior of solutions to the Cauchy problem for the 3-dimensional incompressible Navier-Stokes system. The Marcinkiewicz space L^{3,∞} is used to prove some asymptotic stability results for solutions with infinite energy. Next, this approach is applied to the analysis of two classical “regularized” Navier-Stokes systems. The first one was introduced by J. Leray and consists in “mollifying” the nonlinearity. The second one was proposed by J.-L. Lions, who added the artificial hyper-viscosity (-Δ)^{ℓ/2}, ℓ > 2, to the model. It is shown in the present paper that, in the whole space, solutions of these modified models converge as t → ∞ toward solutions of the original Navier-Stokes system.

  6. Dimensional renormalization: Ladders and rainbows

    SciTech Connect

    Delbourgo, R.; Kalloniatis, A.C.; Thompson, G.

    1996-10-01

    Renormalization factors are most easily extracted by going to the massless limit of the quantum field theory and retaining only a single momentum scale. We derive the factors and renormalized Green's functions to all orders in perturbation theory for rainbow graphs and vertex (or scattering) diagrams at zero momentum transfer, in the context of dimensional regularization, and we prove that the correct anomalous dimensions for those processes emerge in the limit D → 4. © 1996 The American Physical Society.

  7. A family of solutions of a higher order PVI equation near a regular singularity

    NASA Astrophysics Data System (ADS)

    Shimomura, Shun

    2006-09-01

    Restriction of the N-dimensional Garnier system to a complex line yields a system of second-order nonlinear differential equations, which may be regarded as a higher order version of the sixth Painlevé equation. Near a regular singularity of the system, we present a 2N-parameter family of solutions expanded into convergent series. These solutions are constructed by iteration, and their convergence is proved by using a kind of majorant series. For simplicity, we describe the proof in the case N = 2.

  8. Vector Schwinger Model with a Photon Mass Term with Faddeevian Regularization

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.

    2015-09-01

    In this talk, we consider the vector Schwinger model with a photon mass term with Faddeevian regularization, describing two-dimensional (2D) electrodynamics with massless fermions, and study its Hamiltonian and path integral quantization. This theory is seen to be gauge-non-invariant (GNI). We then construct a gauge-invariant (GI) theory corresponding to this GNI theory using the Stueckelberg mechanism and recover the physical content of the original GNI theory from the newly constructed GI theory under some special gauge-fixing conditions.

  9. 5 CFR 532.203 - Structure of regular wage schedules.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Each nonsupervisory and leader regular wage schedule shall have 15 grades, which shall be designated as follows: (1) WG means an appropriated fund nonsupervisory grade; (2) WL means an appropriated fund leader... leader grade. (b) Each supervisory regular wage schedule shall have 19 grades, which shall be...

  10. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Regular working hours. 551.421 Section 551.421 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY... have a regularly scheduled administrative workweek. However, under title 5 United States Code, and...

  11. Endemic infections are always possible on regular networks

    NASA Astrophysics Data System (ADS)

    Del Genio, Charo I.; House, Thomas

    2013-10-01

    We study the dependence of the largest component in regular networks on the clustering coefficient, showing that its size changes smoothly without undergoing a phase transition. We explain this behavior via an analytical approach based on the network structure, and provide an exact equation describing the numerical results. Our work indicates that intrinsic structural properties always allow the spread of epidemics on regular networks.
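
    The claim can be probed numerically on a concrete regular network. A minimal sketch, assuming a ring-lattice construction and a breadth-first component search (the graph choice and function names are illustrative, not the authors' analytical approach):

```python
from collections import deque

def ring_lattice(n, k):
    """k-regular ring lattice: node i links to k//2 neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    return adj

def largest_component(adj):
    """Size of the largest connected component, via breadth-first search."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        queue, comp = deque([s]), 0
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best
```

    Here the ring lattice is connected by construction; the paper's point is that varying the clustering coefficient of a regular network changes the largest component smoothly, without a phase transition.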

  12. Pairing renormalization and regularization within the local density approximation

    SciTech Connect

    Borycki, P.J.; Dobaczewski, J.; Nazarewicz, W.; Stoitsov, M.V.

    2006-04-15

    We discuss methods used in mean-field theories to treat pairing correlations within the local density approximation. Pairing renormalization and regularization procedures are compared in spherical and deformed nuclei. Both prescriptions give fairly similar results, although the theoretical motivation, simplicity, and stability of the regularization procedure make it a method of choice for future applications.

  13. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  14. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  15. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  16. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  17. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  18. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    ERIC Educational Resources Information Center

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  19. Myth 13: The Regular Classroom Teacher Can "Go It Alone"

    ERIC Educational Resources Information Center

    Sisk, Dorothy

    2009-01-01

    With most gifted students being educated in a mainstream model of education, the prevailing myth that the regular classroom teacher can "go it alone" and the companion myth that the teacher can provide for the education of gifted students through differentiation are alive and well. In reality, the regular classroom teacher is too often concerned…

  20. The Inclusion of Differently Abled Students in the Regular Classroom.

    ERIC Educational Resources Information Center

    Lewis, Angela

    This study sought to evaluate the implementation of a program to foster the inclusion of differently abled students into a regular elementary school classroom. The report is based on interviews with eight regular and two special education teachers, as well as the school principal, along with classroom materials and information on inclusion…

  1. 29 CFR 778.500 - Artificial regular rates.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...

  2. 29 CFR 778.500 - Artificial regular rates.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...

  3. 29 CFR 778.500 - Artificial regular rates.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...

  4. 29 CFR 778.500 - Artificial regular rates.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...

  5. 29 CFR 778.500 - Artificial regular rates.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...

  6. 77 FR 76078 - Regular Board of Directors Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Sunshine Act Meeting TIME & DATE: 2:00 p.m., Wednesday, January 9, 2013.... Call to Order II. Executive Session III. Approval of the Regular Board of Directors Meeting Minutes...

  7. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Cable television system regular monitoring. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands...

  8. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Cable television system regular monitoring. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands...

  9. Analysis of regularized Navier-Stokes equations, 2

    NASA Technical Reports Server (NTRS)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found, and the regularity properties of these manifolds were analyzed.

  10. Fundamental and Regular Elementary Schools: Do Differences Exist?

    ERIC Educational Resources Information Center

    Weber, Larry J.; And Others

    This study compared the academic achievement and other outcomes of three public fundamental elementary schools with three regular elementary schools in a metropolitan school district. Modeled after the John Marshal Fundamental School in Pasadena, California, which opened in the fall of 1973, fundamental schools differ from regular schools in that…

  11. Inclusion Professional Development Model and Regular Middle School Educators

    ERIC Educational Resources Information Center

    Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo

    2014-01-01

    The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…

  12. 29 CFR 553.233 - “Regular rate” defined.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... OF THE FAIR LABOR STANDARDS ACT TO EMPLOYEES OF STATE AND LOCAL GOVERNMENTS Fire Protection and Law Enforcement Employees of Public Agencies Overtime Compensation Rules § 553.233 “Regular rate” defined. The rules for computing an employee's “regular rate”, for purposes of the Act's overtime pay...

  13. Heavy pair production currents with general quantum numbers in dimensionally regularized nonrelativistic QCD

    SciTech Connect

    Hoang, Andre H.; Ruiz-Femenia, Pedro

    2006-12-01

    We discuss the form and construction of general color singlet heavy particle-antiparticle pair production currents for arbitrary quantum numbers, and issues related to evanescent spin operators and scheme dependences in nonrelativistic QCD in n = 3 - 2ε dimensions. The anomalous dimensions of the leading interpolating currents for heavy quark and colored scalar pairs in arbitrary ^{2S+1}L_J angular-spin states are determined at next-to-leading order in the nonrelativistic power counting.

  14. Plate falling in a fluid: Regular and chaotic dynamics of finite-dimensional models

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Sergey P.

    2015-05-01

    Results are reviewed concerning the planar problem of a plate falling in a resisting medium studied with models based on ordinary differential equations for a small number of dynamical variables. A unified model is introduced to conduct a comparative analysis of the dynamical behaviors of models of Kozlov, Tanabe-Kaneko, Belmonte-Eisenberg-Moses and Andersen-Pesavento-Wang using common dimensionless variables and parameters. It is shown that the overall structure of the parameter spaces for the different models manifests certain similarities caused by the same inherent symmetry and by the universal nature of the phenomena involved in nonlinear dynamics (fixed points, limit cycles, attractors, and bifurcations).

  15. Regular expression order-sorted unification and matching

    PubMed Central

    Kutsia, Temur; Marin, Mircea

    2015-01-01

    We extend order-sorted unification by permitting regular expression sorts for variables and in the domains of function symbols. The obtained signature corresponds to a finite bottom-up unranked tree automaton. We prove that regular expression order-sorted (REOS) unification is of type infinitary and decidable. The unification problem we present generalizes some known problems, such as order-sorted unification for ranked terms, sequence unification, and word unification with regular constraints. Decidability of REOS unification implies that sequence unification with regular hedge language constraints is decidable, generalizing the decidability result for word unification with regular constraints to terms. A sort weakening algorithm helps to construct a minimal complete set of REOS unifiers from the solutions of sequence unification problems. Moreover, we design a complete algorithm for REOS matching, and show that this problem is NP-complete and the corresponding counting problem is #P-complete. PMID:26523088

  16. Zeta-function regularization approach to finite temperature effects in Kaluza-Klein space-times

    SciTech Connect

    Bytsenko, A.A.; Vanzo, L.; Zerbini, S.

    1992-09-21

    In the framework of the heat-kernel approach to zeta-function regularization, the one-loop effective potential at finite temperature is evaluated for scalar and spinor fields on Kaluza-Klein space-times of the form M^p × M_c^n, where M^p is p-dimensional Minkowski space-time. In particular, when the compact manifold is M_c^n = H^n/Γ, the Selberg trace formula associated with a discrete torsion-free group Γ of the n-dimensional Lobachevsky space H^n is used. An explicit representation for the thermodynamic potential valid for arbitrary temperature is found. As a result, a complete high-temperature expansion is presented, and the roles of zero modes and topological contributions are discussed.

  17. On uniqueness of quasi-regular solutions to Protter problem for Keldish type equations

    NASA Astrophysics Data System (ADS)

    Hristov, T. D.; Popivanov, N. I.; Schneider, M.

    2013-12-01

    Some three-dimensional boundary value problems for mixed type equations of the second kind are studied. Problems of this type, but for mixed type equations of the first kind, were stated by M. Protter in the 1950s. For hyperbolic-elliptic equations they are multidimensional analogues of the classical two-dimensional Morawetz-Guderley transonic problem. For hyperbolic and weakly hyperbolic equations the Protter problems are 3D analogues of the Darboux or Cauchy-Goursat plane problems. In this case, in contrast to the well-posedness of the 2D problems, the new problems are strongly ill-posed. In this paper, similar statements of the Protter problems are given for equations of Keldish type involving lower-order terms. It is shown that the new problems are also ill-posed. A notion of quasi-regular solution is given, and sufficient conditions for uniqueness of such solutions are found. The dependence on the lower-order terms is also studied.

  18. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS’s hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down LDSs’ spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data, we incorporate a second-order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix, which lead to two instances of our rLDS. We show that our rLDS recovers the intrinsic dimensionality of the time-series dynamics well and improves predictive performance compared to baselines on both synthetic and real-world MTS datasets. PMID:25905027
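A minimal sketch of the simplest regularizer in this family: estimating an LDS transition matrix from hidden-state snapshots with a ridge (squared-Frobenius) penalty. This is not the paper's MAP/EM procedure or its low-rank priors; the data, `lam`, and function name are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_transition(Z, lam):
    """A_hat = argmin_A ||Z[1:] - Z[:-1] A^T||_F^2 + lam ||A||_F^2."""
    X, Y = Z[:-1], Z[1:]          # consecutive hidden-state pairs
    d = X.shape[1]
    # closed-form ridge solution: A^T = (X^T X + lam I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y).T

# simulate a short trajectory from a known stable transition matrix
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
Z = np.zeros((200, 2))
Z[0] = [1.0, 1.0]
for t in range(199):
    Z[t + 1] = A_true @ Z[t] + 0.01 * rng.standard_normal(2)

A_hat = ridge_transition(Z, lam=1e-3)
```

With enough data and mild noise, the ridge estimate lands close to the generating matrix; the paper's contribution is replacing this generic penalty with priors that drive whole state dimensions to zero.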

  19. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by implicit kernel is learned, which simultaneously minimizes the least-squares error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms. PMID:24565791
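A minimal sketch of the single-scale building block, without the kernel machinery or the pyramid: graph Laplacian regularized regression fills in unmeasured nodes by minimizing the measured-data misfit plus lam·fᵀLf, where L is the Laplacian of a similarity graph. The chain graph, weights, and `lam` below are illustrative.

```python
import numpy as np

def laplacian_fill(W, y, measured, lam=0.1):
    """Minimize ||M (f - y)||^2 + lam * f^T L f, i.e. solve (M + lam L) f = M y."""
    L = np.diag(W.sum(1)) - W               # combinatorial graph Laplacian
    M = np.diag(measured.astype(float))     # selector for measured nodes
    return np.linalg.solve(M + lam * L, M @ y)

# chain graph 0-1-2-3-4 with only the two endpoints measured
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
y = np.array([0.0, 0.0, 0.0, 0.0, 4.0])
measured = np.array([True, False, False, False, True])

f = laplacian_fill(W, y, measured)
```

The unmeasured interior nodes satisfy the discrete Laplace equation, so they interpolate smoothly between the measured endpoints, which is exactly the behavior the regularizer is meant to enforce on image data.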

  20. Group-sparsity regularization for ill-posed subsurface flow inverse problems

    NASA Astrophysics Data System (ADS)

    Golmohammadi, Azarang; Khaninezhad, Mohammad-Reza M.; Jafarpour, Behnam

    2015-10-01

    Sparse representations provide a flexible and parsimonious description of high-dimensional model parameters for reconstructing subsurface flow property distributions from limited data. To further constrain ill-posed inverse problems, group-sparsity regularization can take advantage of possible relations among the entries of unknown sparse parameters when: (i) groups of sparse elements are either collectively active or inactive and (ii) only a small subset of the groups is needed to approximate the parameters of interest. Since subsurface properties exhibit strong spatial connectivity patterns they may lead to sparse descriptions that satisfy the above conditions. When these conditions are established, a group-sparsity regularization can be invoked to facilitate the solution of the resulting inverse problem by promoting sparsity across the groups. The proposed regularization penalizes the number of groups that are active without promoting sparsity within each group. Two implementations are presented in this paper: one based on the multiresolution tree structure of Wavelet decomposition, without a need for explicit prior models, and another learned from explicit prior model realizations using sparse principal component analysis (SPCA). In each case, the approach first classifies the parameters of the inverse problem into groups with specific connectivity features, and then takes advantage of the grouped structure to recover the relevant patterns in the solution from the flow data. Several numerical experiments are presented to demonstrate the advantages of additional constraining power of group-sparsity in solving ill-posed subsurface model calibration problems.
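The core operation behind group-sparsity regularization can be sketched as the proximal operator of the group-lasso (l2,1) penalty, which zeroes whole groups without promoting sparsity inside a surviving group. This is the generic primitive only; the paper's Wavelet-tree and SPCA group constructions are not reproduced, and the vector, grouping, and threshold below are illustrative.

```python
import numpy as np

def group_soft_threshold(v, groups, tau):
    """Prox of tau * sum_g ||v_g||_2: shrink each group's norm by tau."""
    out = np.zeros_like(v, dtype=float)
    for idx in groups:
        g = v[idx]
        norm = np.linalg.norm(g)
        if norm > tau:                     # group survives, norm is shrunk
            out[idx] = (1.0 - tau / norm) * g
        # groups with ||v_g|| <= tau are set to zero collectively
    return out

v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
w = group_soft_threshold(v, groups, tau=1.0)
```

The first group (norm 5) is kept and uniformly shrunk; the second (norm ≈ 0.14) is eliminated as a whole, illustrating "collectively active or inactive" groups.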

  1. Two hybrid regularization frameworks for solving the electrocardiography inverse problem

    NASA Astrophysics Data System (ADS)

    Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Liu, Feng; Crozier, Stuart

    2008-09-01

In this paper, two hybrid regularization frameworks, LSQR-Tik and Tik-LSQR, which integrate the properties of the direct regularization method (Tikhonov) and the iterative regularization method (LSQR), have been proposed and investigated for solving ECG inverse problems. The LSQR-Tik method is based on the Lanczos process, which yields a sequence of small bidiagonal systems to approximate the original ill-posed problem, and the Tikhonov regularization method is then applied to stabilize the projected problem. The Tik-LSQR method is formulated as an iterative LSQR inverse, augmented with a Tikhonov-like prior information term. The performances of these two hybrid methods are evaluated using a realistic heart-torso model simulation protocol, in which the heart surface source method is employed to calculate the simulated epicardial potentials (EPs) from the action potentials (APs), and the acquired EPs are then used to calculate simulated body surface potentials (BSPs). The results show that the regularized solutions obtained by the LSQR-Tik method are close to those of the Tikhonov method; the computational cost of the LSQR-Tik method, however, is much lower than that of the Tikhonov method. Moreover, the Tik-LSQR scheme can reconstruct the epicardial potential distribution more accurately, particularly for BSPs with high noise levels. This investigation suggests that hybrid regularization methods may be more effective than separate regularization approaches for ECG inverse problems.
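The Tik-LSQR idea (LSQR augmented with a Tikhonov-like prior term) can be sketched by running LSQR on the augmented system [A; √λ·I] x ≈ [b; 0], which is algebraically equivalent to the standard Tikhonov solution. The problem sizes and λ below are illustrative, not the paper's heart-torso protocol.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(40)

lam = 1e-2
# Tikhonov as an augmented least-squares problem, solved iteratively by LSQR
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(20)])
b_aug = np.concatenate([b, np.zeros(20)])
x_lsqr = lsqr(A_aug, b_aug)[0]

# direct Tikhonov solution for comparison: (A^T A + lam I)^{-1} A^T b
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)
```

For large sparse forward operators the iterative route avoids forming AᵀA at all; SciPy's `lsqr` also exposes this shortcut directly through its `damp` argument.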

  2. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms. PMID:26277005

  3. A local-order regularization for geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Gheymasi, H. Mohammadi; Gholami, A.

    2013-11-01

Different types of regularization have been developed to obtain stable solutions to linear inverse problems. Among these, total variation (TV) is known as an edge-preserving method, which leads to piecewise constant solutions and has received much attention for solving inverse problems arising in geophysical studies. However, the method shows staircase effects and is not suitable for models that include smooth regions. To overcome the staircase effect, we present a method which employs a local-order difference operator in the regularization term. This method is performed in two steps: First, we apply a pre-processing step to find the edge locations in the regularized solution using a properly defined minmod limiter, where the edges are determined by a comparison of the solutions obtained using different-order regularizations of the TV type. Then, we construct a local-order difference operator based on the information obtained from the pre-processing step about the edge locations, which is subsequently used as a regularization operator in the final sparsity-promoting regularization. Experimental results from synthetic and real seismic traveltime tomography show that the proposed inversion method is able to retain the smooth regions of the regularized solution, while preserving the sharp transitions present in it.
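The pre-processing step leans on the classical minmod limiter, which returns zero when its arguments disagree in sign and otherwise the argument smallest in magnitude; this makes it a natural edge indicator when comparing slopes from solutions of different regularization order. A minimal sketch of the limiter itself (the paper's full two-step scheme is not reproduced):

```python
import numpy as np

def minmod(a, b):
    """Elementwise minmod: 0 on sign disagreement, else the smaller-magnitude arg."""
    return np.where(a * b <= 0, 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

a = np.array([2.0, -1.0, 1.0, 0.0])
b = np.array([3.0, -4.0, -1.0, 5.0])
m = minmod(a, b)
```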

  4. Structural source identification using a generalized Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Aucejo, M.

    2014-10-01

This paper addresses the problem of identifying mechanical exciting forces from vibration measurements. The proposed approach is based on a generalized Tikhonov regularization that allows taking into account prior information on the measurement noise as well as on the main characteristics of the sources to be identified, such as their sparsity or regularity. To solve such a regularization problem efficiently, a Generalized Iteratively Reweighted Least-Squares (GIRLS) algorithm is introduced. Numerical and experimental validations reveal the crucial role of prior information in the quality of the source identification and the performance of the GIRLS algorithm.
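A minimal iteratively reweighted least-squares sketch in the spirit of GIRLS: an lp penalty with p < 2 (promoting sparse sources) is minimized by repeatedly solving a reweighted Tikhonov problem. The operator, penalty, and parameters are illustrative; the paper's generalized functional is richer than this.

```python
import numpy as np

def irls(A, b, lam=1e-2, p=1.0, n_iter=50, eps=1e-8):
    """Approximately minimize ||A x - b||^2 + lam * ||x||_p^p via reweighting."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # minimum-norm start
    for _ in range(n_iter):
        # weights ~ |x_i|^(p-2), smoothed by eps to avoid division by zero
        w = (x ** 2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 60))                 # underdetermined forward model
x_true = np.zeros(60)
x_true[[5, 17, 40]] = [2.0, -1.5, 1.0]            # three sparse "sources"
b = A @ x_true
x_hat = irls(A, b)
```

With noiseless data and enough measurements, the reweighting concentrates the solution on the true support, which is what the sparsity prior is meant to achieve.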

  5. Regularity criterion for the 3D Hall-magneto-hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dai, Mimi

    2016-07-01

    This paper studies the regularity problem for the 3D incompressible resistive viscous Hall-magneto-hydrodynamic (Hall-MHD) system. The Kolmogorov 41 phenomenological theory of turbulence [14] predicts that there exists a critical wavenumber above which the high frequency part is dominated by the dissipation term in the fluid equation. Inspired by this idea, we apply an approach of splitting the wavenumber combined with an estimate of the energy flux to obtain a new regularity criterion. The regularity condition presented here is weaker than conditions in the existing criteria (Prodi-Serrin type criteria) for the 3D Hall-MHD system.

  6. Regular modes in a mixed-dynamics-based optical fiber.

    PubMed

    Michel, C; Allgaier, M; Doya, V

    2016-02-01

    A multimode optical fiber with a truncated transverse cross section acts as a powerful versatile support to investigate the wave features of complex ray dynamics. In this paper, we concentrate on the case of a geometry inducing mixed dynamics. We highlight that regular modes associated with stable periodic orbits present an enhanced spatial intensity localization. We report the statistics of the inverse participation ratio whose features are analogous to those of Anderson localized modes. Our study is supported by both numerical and experimental results on the spatial localization and spectral regularity of the regular modes. PMID:26986325
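The localization statistic used here, the inverse participation ratio (IPR), is easy to state: for a normalized mode it equals 1/N when the intensity is spread uniformly over N sites and approaches 1 when the mode sits on a single site. A minimal sketch on synthetic mode vectors (the paper's fiber modes are not reproduced):

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of a mode: sum of squared normalized intensities."""
    p = np.abs(psi) ** 2
    p = p / p.sum()            # normalize the intensity distribution
    return (p ** 2).sum()

N = 100
extended = np.ones(N) / np.sqrt(N)   # fully delocalized mode
localized = np.zeros(N)
localized[0] = 1.0                   # mode concentrated on one site
```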

  7. Exploring the spectrum of regularized bosonic string theory

    SciTech Connect

Ambjørn, J.; Makeenko, Y.

    2015-03-15

    We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.

  8. Blind image deblurring with edge enhancing total variation regularization

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Hong, Hanyu; Song, Jie; Hua, Xia

    2015-04-01

Blind image deblurring is an important issue. In this paper, we focus on solving this issue by a constrained regularization method. Motivated by the importance of edges to visual perception, an edge-enhancing indicator is introduced to constrain the total variation regularization, and the bilateral filter is used for edge-preserving smoothing. The proposed edge-enhancing regularization method aims to smooth preferentially within each region and preserve edges. Experiments on simulated and real motion-blurred images show that the proposed method is competitive with recent state-of-the-art total variation methods.

  9. Numerical Study of Sound Emission by 2D Regular and Chaotic Vortex Configurations

    NASA Astrophysics Data System (ADS)

    Knio, Omar M.; Collorec, Luc; Juvé, Daniel

    1995-02-01

The far-field noise generated by a system of three Gaussian vortices lying over a flat boundary is numerically investigated using a two-dimensional vortex element method. The method is based on the discretization of the vorticity field into a finite number of smoothed vortex elements with spherical overlapping cores. The elements are convected in a Lagrangian reference frame along particle trajectories using the local velocity vector, given in terms of a desingularized Biot-Savart law. The initial structure of the vortex system is triangular; a one-dimensional family of initial configurations is constructed by keeping one side of the triangle fixed and vertical, and varying the abscissa of the centroid of the remaining vortex. The inviscid dynamics of this vortex configuration are first investigated using non-deformable vortices. Depending on the aspect ratio of the initial system, regular or chaotic motion occurs. Due to wall-related symmetries, the far-field sound always exhibits a time-independent quadrupolar directivity with maxima parallel and perpendicular to the wall. When regular motion prevails, the noise spectrum is dominated by discrete frequencies which correspond to the fundamental system frequency and its superharmonics. For chaotic motion, a broadband spectrum is obtained; computed sound levels are substantially higher than in non-chaotic systems. A more sophisticated analysis is then performed which accounts for vortex core dynamics. Results show that the vortex cores are susceptible to inviscid instability which leads to violent vorticity reorganization within the core. This phenomenon has little effect on the large-scale features of the motion of the system or on low-frequency sound emission. However, it leads to the generation of a high-frequency noise band in the acoustic pressure spectrum. The latter is observed in both regular and chaotic system simulations.

  10. Reconstruction of 3D ultrasound images based on Cyclic Regularized Savitzky-Golay filters.

    PubMed

    Toonkum, Pollakrit; Suwanwela, Nijasri C; Chinrungrueng, Chedsada

    2011-02-01

This paper presents a new three-dimensional (3D) ultrasound reconstruction algorithm for generation of 3D images from a series of two-dimensional (2D) B-scans acquired in the mechanical linear scanning framework. Unlike most existing 3D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the Cyclic Regularized Savitzky-Golay (CRSG) filter, is a new variant of the Savitzky-Golay (SG) smoothing filter. The CRSG filter improves upon the original SG filter in two respects: First, the cyclic indicator function has been incorporated into the least-squares cost function to enable the CRSG filter to approximate nonuniformly spaced data of the unobserved image intensities contained in unfilled voxels and reduce speckle noise of the observed image intensities contained in filled voxels. Second, a regularization function has been added to the least-squares cost function as a mechanism to balance the degree of speckle reduction against the degree of detail preservation. The CRSG filter has been evaluated and compared with the Voxel Nearest-Neighbor (VNN) interpolation post-processed by the Adaptive Speckle Reduction (ASR) filter, the VNN interpolation post-processed by the Adaptive Weighted Median (AWM) filter, the Distance-Weighted (DW) interpolation, and the Adaptive Distance-Weighted (ADW) interpolation, on reconstructing a synthetic 3D spherical image and a clinical 3D carotid artery bifurcation in the mechanical linear scanning framework. This preliminary evaluation indicates that the CRSG filter is more effective in both speckle reduction and geometric reconstruction of 3D ultrasound images than the other methods. PMID:20696448
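The CRSG filter builds on the classical Savitzky-Golay smoother, which fits a low-order polynomial in a sliding window by least squares. The sketch below shows only that standard SG step via SciPy; the cyclic indicator and regularization terms of CRSG are not reproduced, and the signal and window parameters are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0.0, 1.0, 101)
clean = x ** 2                                   # a quadratic test signal
rng = np.random.default_rng(3)
noisy = clean + 0.05 * rng.standard_normal(x.size)

# window of 11 samples, polynomial order 2: reproduces quadratics exactly
smoothed = savgol_filter(noisy, window_length=11, polyorder=2)
```

A useful sanity check on any SG implementation is that a polynomial of degree at most `polyorder` passes through the filter unchanged, while high-frequency noise is attenuated.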

  11. Optimizing the regularization for image reconstruction of cerebral diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Habermehl, Christina; Steinbrink, Jens; Müller, Klaus-Robert; Haufe, Stefan

    2014-09-01

Functional near-infrared spectroscopy (fNIRS) is an optical method for noninvasively determining brain activation by estimating changes in the absorption of near-infrared light. Diffuse optical tomography (DOT) extends fNIRS by applying overlapping "high density" measurements, thus providing three-dimensional imaging with improved spatial resolution. Reconstructing brain activation images with DOT requires solving an underdetermined inverse problem with far more unknowns in the volume than in the surface measurements. All methods of solving this type of inverse problem rely on regularization and the choice of corresponding regularization or convergence criteria. While several regularization methods are available, it is unclear how well suited they are for cerebral functional DOT in a semi-infinite geometry. Furthermore, the regularization parameter is often chosen without an independent evaluation, and it may be tempting to choose the solution that matches a hypothesis and reject the others. In this simulation study, we start out by demonstrating how the quality of cerebral DOT reconstructions is altered with the choice of the regularization parameter for different methods. To independently select the regularization parameter, we propose a cross-validation procedure which achieves a reconstruction quality close to the optimum. Additionally, we compare the outcome of seven different image reconstruction methods for cerebral functional DOT. The methods selected include reconstruction procedures that are already widely used for cerebral DOT [minimum ℓ2-norm estimate (ℓ2MNE) and truncated singular value decomposition], recently proposed sparse reconstruction algorithms [minimum ℓ1- and a smooth minimum ℓ0-norm estimate (ℓ1MNE, ℓ0MNE, respectively)] and a depth- and noise-weighted minimum norm (wMNE). Furthermore, we expand the range of algorithms for DOT by adapting two EEG-source localization algorithms [sparse basis field expansions and linearly
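The paper's central practical point, selecting the regularization parameter by cross-validation rather than by hand, can be sketched with a minimum ℓ2-norm (Tikhonov) reconstruction: for each candidate λ, fit on a training split of the measurements and score on held-out measurements. The random stand-in for the DOT sensitivity matrix, the split, and the λ grid are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
J = rng.standard_normal((60, 120))           # stand-in "sensitivity" matrix
x_true = np.zeros(120)
x_true[50:60] = 1.0                          # a localized activation
y = J @ x_true + 0.05 * rng.standard_normal(60)

train, test = np.arange(0, 60, 2), np.arange(1, 60, 2)

def tikhonov(J, y, lam):
    """Minimum l2-norm estimate via the underdetermined Tikhonov form."""
    return J.T @ np.linalg.solve(J @ J.T + lam * np.eye(J.shape[0]), y)

lams = np.logspace(-4, 2, 13)
cv_err = [np.linalg.norm(J[test] @ tikhonov(J[train], y[train], lam) - y[test])
          for lam in lams]
lam_cv = lams[int(np.argmin(cv_err))]        # data-driven parameter choice
```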

  13. Loop Invariants, Exploration of Regularities, and Mathematical Games.

    ERIC Educational Resources Information Center

    Ginat, David

    2001-01-01

    Presents an approach for illustrating, on an intuitive level, the significance of loop invariants for algorithm design and analysis. The illustration is based on mathematical games that require the exploration of regularities via problem-solving heuristics. (Author/MM)

  14. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of the Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, and upper-Lipschitz continuity, and at the same time it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean free path epsilon approaches 0, and the convergence rate is estimated.

  15. Vectorial total variation-based regularization for variational image registration.

    PubMed

    Chumchob, Noppadol

    2013-11-01

    To use interdependence between the primary components of the deformation field for smooth and non-smooth registration problems, the channel-by-channel total variation- or standard vectorial total variation (SVTV)-based regularization has been extended to a more flexible and efficient technique, allowing high quality regularization procedures. Based on this method, this paper proposes a fast nonlinear multigrid (NMG) method for solving the underlying Euler-Lagrange system of two coupled second-order nonlinear partial differential equations. Numerical experiments using both synthetic and realistic images not only confirm that the recommended VTV-based regularization yields better registration qualities for a wide range of applications than those of the SVTV-based regularization, but also that the proposed NMG method is fast, accurate, and reliable in delivering visually-pleasing registration results. PMID:23893729

  16. Regular Doctor Visits Can Help Spot Colon Cancer

    MedlinePlus

Regular Doctor Visits Can Help Spot Colon Cancer. Early detection improves the likelihood of survival, researchers say. Seeing a doctor regularly increases the odds you'll be screened for colon cancer, a new study says.

  17. Generic quantum walks with memory on regular graphs

    NASA Astrophysics Data System (ADS)

    Li, Dan; Mc Gettrick, Michael; Gao, Fei; Xu, Jie; Wen, Qiao-Yan

    2016-04-01

Quantum walks with memory (QWM) are a type of modified quantum walks that record the walker's latest path. To date, only two kinds of QWM have been presented, so designing further QWM is desirable for exploring their potential. In this work, by presenting the one-to-one correspondence between QWM on a regular graph and quantum walks without memory (QWoM) on a line digraph of the regular graph, we construct a generic model of QWM on regular graphs. This construction gives a general scheme for building all possible standard QWM on regular graphs and makes it possible to study properties of different kinds of QWM. Here, by taking the simplest example, which is QWM with one memory on the line, we analyze some properties of QWM, such as variance, occupancy rate, and localization.

  18. Analytic regularization for landmark-based image registration

    NASA Astrophysics Data System (ADS)

    Shusharina, Nadezhda; Sharp, Gregory

    2012-03-01

    Landmark-based registration using radial basis functions (RBF) is an efficient and mathematically transparent method for the registration of medical images. To ensure invertibility and diffeomorphism of the RBF-based vector field, various regularization schemes have been suggested. Here, we report a novel analytic method of RBF regularization and demonstrate its power for Gaussian RBF. Our analytic formula can be used to obtain a regularized vector field from the solution of a system of linear equations, exactly as in traditional RBF, and can be generalized to any RBF with infinite support. We statistically validate the method on global registration of synthetic and pulmonary images. Furthermore, we present several clinical examples of multistage intensity/landmark-based registrations, where regularized Gaussian RBF are successful in correcting locally misregistered areas resulting from automatic B-spline registration. The intended ultimate application of our method is rapid, interactive local correction of deformable registration with a small number of mouse clicks.
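A minimal sketch of plain (unregularized) Gaussian-RBF landmark interpolation, the baseline that the paper's analytic regularization modifies: a linear system is solved so the displacement field exactly interpolates the landmark displacements. The landmark positions and `sigma` are illustrative.

```python
import numpy as np

def rbf_field(src, dst, sigma):
    """Return a warp p -> p + displacement interpolating src -> dst landmarks."""
    # Gram matrix K_ij = exp(-||src_i - src_j||^2 / (2 sigma^2))
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    coef = np.linalg.solve(K, dst - src)      # one coefficient vector per axis
    def displace(p):
        d2p = ((p[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return p + np.exp(-d2p / (2 * sigma ** 2)) @ coef
    return displace

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
warp = rbf_field(src, dst, sigma=0.7)
```

Exact interpolation is the property that can break invertibility for large displacements; the regularization schemes the paper discusses trade some interpolation accuracy for a diffeomorphic field.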

  19. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... status when appointed as cadets. (b) Regular category applicants must arrange to have their high school.... Applicants not selected are reassigned on Academy notification to the CBPO. Applicants to technical...

  20. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... status when appointed as cadets. (b) Regular category applicants must arrange to have their high school.... Applicants not selected are reassigned on Academy notification to the CBPO. Applicants to technical...

  1. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... status when appointed as cadets. (b) Regular category applicants must arrange to have their high school.... Applicants not selected are reassigned on Academy notification to the CBPO. Applicants to technical...

  2. 5 CFR 550.1307 - Authority to regularize paychecks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... caused by work scheduling cycles that result in varying hours in the firefighters' tours of duty from pay... for regular tours of duty over the firefighter's entire work scheduling cycle must, to the...

  3. Are Pupils in Special Education Too "Special" for Regular Education?

    NASA Astrophysics Data System (ADS)

    Pijl, Ysbrand J.; Pijl, Sip J.

    1998-01-01

    In the Netherlands special needs pupils are often referred to separate schools for the Educable Mentally Retarded (EMR) or the Learning Disabled (LD). There is an ongoing debate on how to reduce the growing numbers of special education placements. One of the main issues in this debate concerns the size of the difference in cognitive abilities between pupils in regular education and those eligible for LD or EMR education. In this study meta-analysis techniques were used to synthesize the findings from 31 studies on differences between pupils in regular primary education and those in special education in the Netherlands. Studies were grouped into three categories according to the type of measurements used: achievement, general intelligence and neuropsychological tests. It was found that pupils in regular education and those in special education differ in achievement and general intelligence. Pupils in schools for the educable mentally retarded in particular perform at a much lower level than is common in regular Dutch primary education.

  4. Robust destriping method with unidirectional total variation and framelet regularization.

    PubMed

    Chang, Yi; Fang, Houzhang; Yan, Luxin; Liu, Hai

    2013-10-01

    Multidetector imaging systems often suffer from the problem of stripe noise and random noise, which greatly degrade the imaging quality. In this paper, we propose a variational destriping method that combines unidirectional total variation and framelet regularization. Total-variation-based regularizations are considered effective in removing different kinds of stripe noise, and framelet regularization can efficiently preserve the detail information. In essence, these two regularizations are complementary to each other. Moreover, the proposed method can also efficiently suppress random noise. The split Bregman iteration method is employed to solve the resulting minimization problem. Comparative results demonstrate that the proposed method significantly outperforms state-of-the-art destriping methods on both qualitative and quantitative assessments. PMID:24104244
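Why unidirectional TV suits stripe noise can be seen in a few lines: additive vertical stripes are constant along columns, so they inflate the horizontal-difference TV term while leaving the vertical one untouched. The sketch below shows only this observation on synthetic data; the full destriping model (framelet term, split Bregman solver) is not reproduced.

```python
import numpy as np

def tv_h(u):
    """Anisotropic TV along rows: sum |u[i, j+1] - u[i, j]|."""
    return np.abs(np.diff(u, axis=1)).sum()

def tv_v(u):
    """Anisotropic TV along columns: sum |u[i+1, j] - u[i, j]|."""
    return np.abs(np.diff(u, axis=0)).sum()

rng = np.random.default_rng(5)
clean = np.ones((64, 64))                              # flat scene
stripes = np.tile(0.5 * rng.standard_normal(64), (64, 1))  # per-column offsets
noisy = clean + stripes                                # vertically striped image
```

Penalizing only `tv_h` therefore targets the stripes directly, while the framelet term handles the detail the TV penalty would otherwise smear.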

  5. Aggregation of regularized solutions from multiple observation models

    NASA Astrophysics Data System (ADS)

    Chen, Jieyang; Pereverzyev, Sergiy, Jr.; Xu, Yuesheng

    2015-07-01

    Joint inversion of multiple observation models has important applications in many disciplines including geoscience, image processing and computational biology. One of the methodologies for joint inversion of ill-posed observation equations naturally leads to multi-parameter regularization, which has been intensively studied over the last several years. However, problems such as the choice of multiple regularization parameters remain unsolved. In the present study, we discuss a rather general approach to the regularization of multiple observation models, based on the idea of the linear aggregation of approximations corresponding to different values of the regularization parameters. We show how the well-known linear functional strategy can be used for such an aggregation and prove that the error of a constructive aggregator differs from the ideal error value by a quantity of an order higher than the best guaranteed accuracy from the most trustable observation model. The theoretical analysis is illustrated by numerical experiments with simulated data.
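A minimal sketch of linear aggregation in the single-operator case: given several Tikhonov solutions for different regularization parameters, find the linear combination whose forward image best fits the data, rather than committing to one parameter. The operator, noise level, and parameter grid are illustrative, and this does not reproduce the paper's linear functional strategy in detail.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((50, 30))
x_true = rng.standard_normal(30)
y = A @ x_true + 0.01 * rng.standard_normal(50)

def tik(lam):
    """Tikhonov solution for one regularization parameter."""
    return np.linalg.solve(A.T @ A + lam * np.eye(30), A.T @ y)

# candidate solutions for three regularization parameters
X = np.column_stack([tik(lam) for lam in (1e-3, 1e-1, 1e1)])

# aggregation weights: least-squares fit of the combined forward image to y
c = np.linalg.lstsq(A @ X, y, rcond=None)[0]
x_agg = X @ c
```

By construction the aggregated solution fits the data at least as well as every individual candidate, which is the elementary version of the guarantee the paper proves for its constructive aggregator.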

  6. A novel regularized edge-preserving super-resolution algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Chen, Fu-sheng; Zhang, Zhi-jie; Wang, Chen-sheng

    2013-09-01

    Super-resolution (SR) is an effective approach to obtaining high-resolution infrared images. However, image super-resolution reconstruction is essentially an ill-posed problem, so it is important to design an effective regularization term (image prior). A Gaussian prior is widely used in the regularization term, but it leads to over-smoothed SR reconstructions. Here, a novel regularization term called the non-local means (NLM) term is derived, based on the assumption that natural image content is likely to repeat itself within some neighborhood. In the proposed framework, the estimated high-resolution image is obtained by minimizing a cost function, and an iterative method is applied to solve the optimization problem. As the iteration progresses, the regularization term is adaptively updated. The proposed algorithm has been tested in several experiments. The experimental results show that the proposed approach is robust and reconstructs higher-quality images in both quantitative terms and perceptual effect.
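The NLM assumption above (image content repeats within a neighborhood) translates into weights built from patch similarity. A minimal 1-D non-local means sketch with illustrative parameters, not the paper's SR cost function:

```python
import numpy as np

def nlm_1d(signal, patch_radius=2, h=0.5):
    """Non-local means: average samples weighted by patch similarity."""
    n = len(signal)
    padded = np.pad(signal, patch_radius, mode="edge")
    patches = np.array([padded[i:i + 2 * patch_radius + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.sum((patches - patches[i]) ** 2, axis=1)  # patch distances
        w = np.exp(-d2 / h**2)
        w /= w.sum()                                      # weights sum to 1
        out[i] = w @ signal
    return out

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, 0.0], 40)   # repetitive, piecewise-constant content
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = nlm_1d(noisy)
```

The bandwidth h controls how sharply dissimilar patches are down-weighted; repetitive content gives many near-duplicate patches to average over, which is exactly the self-similarity the prior exploits.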

  7. Dimensional Duality

    SciTech Connect

    Green, Daniel; Lawrence, Albion; McGreevy, John; Morrison, David R.; Silverstein, Eva; /SLAC /Stanford U., Phys. Dept.

    2007-05-18

    We show that string theory on a compact negatively curved manifold, preserving a U(1)^{b_1} winding symmetry, grows at least b_1 new effective dimensions as the space shrinks. The winding currents yield a ''D-dual'' description of a Riemann surface of genus h in terms of its 2h-dimensional Jacobian torus, perturbed by a closed string tachyon arising as a potential energy term in the worldsheet sigma model. D-branes on such negatively curved manifolds also reveal this structure, with a classical moduli space consisting of a b_1-torus. In particular, we present an AdS/CFT system which offers a non-perturbative formulation of such supercritical backgrounds. Finally, we discuss generalizations of this new string duality.

  8. Regular structure in the inner Cassini Division of Saturn's rings

    NASA Technical Reports Server (NTRS)

    Flynn, Brian C.; Cuzzi, Jeffrey N.

    1989-01-01

    Voyager imaging, radio occultation, and stellar occultation data for the regular structure of Saturn's inner Cassini Division are analyzed. The regular optical depth variation observed in the radio occultation experiment scan and the feature noted in Voyager images are shown to be the same structure. One candidate explanation is the gravitational wakes of two 10-km-radius satellites orbiting within the division; however, the structure is azimuthally symmetric, which is judged to rule out the possibility that large moonlets are responsible for the observed structure.

  9. Regularization methods for Nuclear Lattice Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Klein, Nico; Lee, Dean; Liu, Weitao; Meißner, Ulf-G.

    2015-07-01

    We investigate Nuclear Lattice Effective Field Theory for the two-body system at several lattice spacings at lowest order in the pionless as well as in the pionful theory. We discuss issues of regularization and predictions for the effective range expansion. In the pionless case, a simple Gaussian smearing allows us to demonstrate lattice-spacing independence over a wide range of lattice spacings. We show that regularization methods known from the continuum formulation are both necessary and feasible for the pionful approach.

  10. Note on regular black holes in a brane world

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2015-10-01

    In this work, we show that regular black holes in a Randall-Sundrum-type brane world model are generated by the nonlocal bulk influence, expressed by a constant parameter in the brane metric, only in the spherical case. In the axial case (black holes with rotation), this influence forbids such solutions. A nonconstant bulk influence is necessary to generate regular black holes with rotation in this context.

  11. Lesions impairing regular versus irregular past tense production☆

    PubMed Central

    Meteyard, Lotte; Price, Cathy J.; Woollams, Anna M.; Aydelott, Jennifer

    2013-01-01

    We investigated selective impairments in the production of regular and irregular past tense by examining language performance and lesion sites in a sample of twelve stroke patients. A disadvantage in regular past tense production was observed in six patients when phonological complexity was greater for regular than irregular verbs, and in three patients when phonological complexity was closely matched across regularity. These deficits were not consistently related to grammatical difficulties or phonological errors but were consistently related to lesion site. All six patients with a regular past tense disadvantage had damage to the left ventral pars opercularis (in the inferior frontal cortex), an area associated with articulatory sequencing in prior functional imaging studies. In addition, those that maintained a disadvantage for regular verbs when phonological complexity was controlled had damage to the left ventral supramarginal gyrus (in the inferior parietal lobe), an area associated with phonological short-term memory. When these frontal and parietal regions were spared in patients who had damage to subcortical (n = 2) or posterior temporo-parietal regions (n = 3), past tense production was relatively unimpaired for both regular and irregular forms. The remaining (12th) patient was impaired in producing regular past tense but was significantly less accurate when producing irregular past tense. This patient had frontal, parietal, subcortical and posterior temporo-parietal damage, but was distinguished from the other patients by damage to the left anterior temporal cortex, an area associated with semantic processing. We consider how our lesion site and behavioral observations have implications for theoretical accounts of past tense production. PMID:24273726

  12. New solutions of charged regular black holes and their stability

    NASA Astrophysics Data System (ADS)

    Uchikata, Nami; Yoshida, Shijun; Futamase, Toshifumi

    2012-10-01

    We construct new regular black hole solutions by matching the de Sitter solution and the Reissner-Nordström solution with a timelike thin shell. The thin shell is assumed to have mass but no pressure and obeys an equation of motion derived from Israel's junction conditions. By investigating the equation of motion for the shell, we obtain stationary solutions of charged regular black holes and examine stability of the solutions. Stationary solutions are found in the limited range 0.87L ≤ m ≤ 1.99L, and they are stable against small radial displacement of the shell with fixed values of m, M, and Q if M > 0, where L is the de Sitter horizon radius, m the black hole mass, M the proper mass of the shell, and Q the black hole charge. All the solutions obtained are highly charged in the sense that Q/m > √3/2 ≈ 0.866. By taking the massless limit of the shell in the present regular black hole solutions, we obtain the charged regular black hole with a massless shell obtained by Lemos and Zanchin and investigate stability of the solutions. It is found that Lemos and Zanchin's regular black hole solutions given by the massless limit of the present regular black hole solutions permit stable solutions, which are obtained in the limit M → 0.

  13. The relationship between lifestyle regularity and subjective sleep quality

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument-the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity-and a questionnaire instrument--the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant (rho = -0.4, p < 0.001) correlation between SRM (both metrics) and PSQI, indicating that subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
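The rho reported above is a Spearman rank correlation, computable from rank differences when there are no ties. A small sketch with invented SRM/PSQI-like scores (the study's data are not reproduced here), arranged so that higher regularity goes with fewer sleep problems, i.e. a negative rho:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no ties): 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    rx = np.argsort(np.argsort(x))   # 0-based ranks of x
    ry = np.argsort(np.argsort(y))   # 0-based ranks of y
    d = (rx - ry).astype(float)      # rank differences
    return 1.0 - 6.0 * np.sum(d**2) / (n * (n**2 - 1))

# Hypothetical scores: SRM (lifestyle regularity) up, PSQI (sleep problems) down.
srm  = [20, 35, 41, 47, 52, 58, 63, 70]
psqi = [12,  9, 10,  7,  6,  8,  4,  3]
rho = spearman_rho(srm, psqi)
```

Ranks rather than raw scores make the statistic robust to the monotone-but-nonlinear relationship one expects between a diary metric and a questionnaire index.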

  14. Nonlocal means-based regularizations for statistical CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Ma, Jianhua; Liu, Yan; Han, Hao; Li, Lihong; Wang, Jing; Liang, Zhengrong

    2014-03-01

    Statistical iterative reconstruction (SIR) methods have shown remarkable gains over the conventional filtered backprojection (FBP) method in improving image quality for low-dose computed tomography (CT). They reconstruct the CT images by maximizing/minimizing a cost function in a statistical sense, where the cost function usually consists of two terms: the data-fidelity term modeling the statistics of measured data, and the regularization term reflecting prior information. The regularization term in SIR plays a critical role for successful image reconstruction, and an established family of regularizations is based on the Markov random field (MRF) model. Inspired by the success of the nonlocal means (NLM) algorithm in image processing applications, we proposed, in this work, a family of generic and edge-preserving NLM-based regularizations for SIR. We evaluated one of them where the potential function takes a quadratic form. Experimental results with both digital and physical phantoms clearly demonstrated that SIR with the proposed regularization can achieve more significant gains than SIR with the widely used Gaussian MRF regularization and the conventional FBP method, in terms of image noise reduction and resolution preservation.

  15. Phase transitions in a 3 dimensional lattice loop gas

    NASA Astrophysics Data System (ADS)

    MacKenzie, Richard; Nebia-Rahal, F.; Paranjape, M. B.

    2010-06-01

    We investigate, via Monte Carlo simulations, the phase structure of a system of closed, nonintersecting, but otherwise noninteracting, loops in 3 Euclidean dimensions. The loops correspond to closed trajectories of massive particles, and we find a phase transition as a function of their mass. We identify the order parameter as the average length of the loops at equilibrium. This order parameter exhibits a sharp increase as the mass is decreased through a critical value; the behavior appears to be a crossover transition. We believe that the model represents an effective description of the broken-symmetry sector of the 2+1 dimensional Abelian Higgs model in the extreme strong coupling limit. The massive gauge bosons and the neutral scalars are decoupled, and the relevant low-lying excitations correspond to vortices and antivortices. The functional integral can be approximated by a sum over simple, closed vortex loop configurations. We present a novel fashion to generate nonintersecting closed loops, starting from a tetrahedral tessellation of 3-space. The two phases that we find admit the following interpretation: the usual Higgs phase and a novel phase which is heralded by the appearance of effectively infinitely long loops. We compute the expectation value of the Wilson loop operator and that of the Polyakov loop operator. The Wilson loop exhibits perimeter-law behavior in both phases, implying that the transition corresponds neither to the restoration of symmetry nor to confinement. The effective interaction between external charges is screened in both phases; however, there is a dramatic increase in the polarization cloud in the novel phase, as shown by the energy shift introduced by the Wilson loop.

  16. Regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Cates, Matthew J

    2014-01-01

    Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular salmeterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and FDA submissions in relation to salmeterol. The date of the most recent search was August 2011. Selection criteria We included controlled parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular salmeterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 26 trials comparing salmeterol to placebo and eight trials comparing with salbutamol. These included 62,815 participants with asthma (including 2,599 children). In six trials (2,766 patients), no serious adverse event data could be obtained. All-cause mortality was higher with regular salmeterol than placebo but the increase was not significant (Peto odds ratio (OR) 1.33 (95% CI 0.85 to 2.08)). Non-fatal serious adverse events were significantly increased when regular salmeterol was compared with placebo (OR 1.15 95% CI 1.02 to 1.29). One extra serious adverse event occurred over 28 weeks for every 188 people
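The Peto odds ratio quoted in the results is a one-step estimate built from observed-minus-expected event counts and a hypergeometric variance. A sketch for a single 2×2 table, with invented counts rather than the review's data (the formulas follow the standard Peto method, stated here from general knowledge):

```python
import math

def peto_or(events_t, total_t, events_c, total_c):
    """One-step Peto odds ratio with a 95% CI for a single 2x2 table."""
    N = total_t + total_c
    r = events_t + events_c                     # total events across both arms
    O = events_t                                # observed events, treatment arm
    E = r * total_t / N                         # expected under no treatment effect
    V = r * (N - r) * total_t * total_c / (N**2 * (N - 1))  # hypergeometric variance
    log_or = (O - E) / V
    se = 1.0 / math.sqrt(V)
    lo, hi = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
    return math.exp(log_or), (lo, hi)

# Hypothetical: 30 events among 1000 treated vs 20 among 1000 controls.
or_hat, (lo, hi) = peto_or(30, 1000, 20, 1000)
```

With few events per arm, the Peto method avoids the zero-cell corrections that ordinary odds ratios need, which is why it is common for rare serious adverse events.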

  17. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    SciTech Connect

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regards to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an ℓ2 data-fidelity term and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. 
Simulation experiments were used to illustrate why multispectral data were used

  18. Early family regularity protects against later disruptive behavior.

    PubMed

    Rijlaarsdam, Jolien; Tiemeier, Henning; Ringoot, Ank P; Ivanova, Masha Y; Jaddoe, Vincent W V; Verhulst, Frank C; Roza, Sabine J

    2016-07-01

    Infants' temperamental anger or frustration reactions are highly stable, but are also influenced by maturation and experience. It is yet unclear why some infants high in anger or frustration reactions develop disruptive behavior problems whereas others do not. We examined family regularity, conceptualized as the consistency of mealtime and bedtime routines, as a protective factor against the development of oppositional and aggressive behavior. This study used prospectively collected data from 3136 families participating in the Generation R Study. Infant anger or frustration reactions and family regularity were reported by mothers when children were ages 6 months and 2-4 years, respectively. Multiple informants (parents, teachers, and children) and methods (questionnaire and interview) were used in the assessment of children's oppositional and aggressive behavior at age 6. Higher levels of family regularity were associated with lower levels of child aggression independent of temperamental anger or frustration reactions (β = -0.05, p = 0.003). The association between child oppositional behavior and temperamental anger or frustration reactions was moderated by family regularity and child gender (β = 0.11, p = 0.046): family regularity reduced the risk for oppositional behavior among those boys who showed anger or frustration reactions in infancy. In conclusion, family regularity reduced the risk for child aggression and showed a gender-specific protective effect against child oppositional behavior associated with anger or frustration reactions. Families that ensured regularity of mealtime and bedtime routines buffered their infant sons high in anger or frustration reactions from developing oppositional behavior. PMID:26589300

  19. Particle motion and Penrose processes around rotating regular black hole

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

    The motion of neutral particles around a rotating regular black hole, derived from the Ayón-Beato-García (ABG) black hole solution by the Newman-Janis algorithm in the preceding paper (Toshmatov et al., Phys. Rev. D, 89:104017, 2014), has been studied. The dependencies of the ISCO (innermost stable circular orbits along geodesics) and unstable orbits on the value of the electric charge of the rotating regular black hole have been shown. Energy extraction from the rotating regular black hole through various processes has been examined. We have found an expression for the center-of-mass energy of colliding neutral particles coming from infinity, based on the BSW (Bañados-Silk-West) mechanism. The electric charge Q of the rotating regular black hole decreases the potential of the gravitational field as compared to the Kerr black hole, and the particles exhibit smaller binding energy at the circular geodesics. This causes an increase of the efficiency of energy extraction through the BSW process in the presence of the electric charge Q. Furthermore, we have studied the particle emission due to the BSW effect, assuming that two neutral particles collide near the horizon of the rotating regular extremal black hole and produce another two particles. We have shown that the efficiency of the energy extraction is less than the value 146.6% valid for the Kerr black hole. It has also been demonstrated that the efficiency of energy extraction from the rotating regular black hole via the Penrose process decreases with increasing electric charge Q and is smaller than 20.7%, the value for the extreme Kerr black hole with specific angular momentum a = M.

  20. Two-Dimensional Systolic Array For Kalman-Filter Computing

    NASA Technical Reports Server (NTRS)

    Chang, Jaw John; Yeh, Hen-Geul

    1988-01-01

    Two-dimensional, systolic-array, parallel data processor performs Kalman filtering in real time. Algorithm rearranged to be Faddeev algorithm for generalized signal processing. Algorithm mapped onto very-large-scale integrated-circuit (VLSI) chip in two-dimensional, regular, simple, expandable array of concurrent processing cells. Processor does matrix/vector-based algebraic computations. Applications include adaptive control of robots, remote manipulators and flexible structures and processing radar signals to track targets.
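The Faddeev scheme the brief refers to computes C·A⁻¹·B + D by a single Gaussian-elimination sweep on the compound array [[A, B], [−C, D]], which is what maps so naturally onto a regular grid of processing cells. A hedged numpy sketch with generic matrices (not the original chip's Kalman-filter arrangement):

```python
import numpy as np

def faddeev(A, B, C, D):
    """Annihilate the -C block of [[A, B], [-C, D]] by row operations;
    the lower-right block then holds C @ inv(A) @ B + D."""
    W = np.linalg.solve(A.T, C.T).T   # W = C @ inv(A), without forming inv(A)
    return D + W @ B                  # bottom rows after adding W times the top rows

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # kept well-conditioned
B = rng.standard_normal((4, 3))
C = rng.standard_normal((2, 4))
D = rng.standard_normal((2, 3))
result = faddeev(A, B, C, D)
```

Choosing A, B, C, D appropriately makes the same sweep yield matrix products, inverses, or Schur complements, which is why one array layout covers so many of the algebraic steps in a Kalman filter.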

  1. Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.

  2. Regular treatment with formoterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Cates, Matthew J

    2014-01-01

    Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular formoterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and Food and Drug Administration (FDA) submissions in relation to formoterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular formoterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second author checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 22 studies (8032 participants) comparing regular formoterol to placebo and salbutamol. Non-fatal serious adverse event data could be obtained for all participants from published studies comparing formoterol and placebo but only 80% of those comparing formoterol with salbutamol or terbutaline. Three deaths occurred on regular formoterol and none on placebo; this difference was not statistically significant. It was not possible to assess disease-specific mortality in view of the small number of deaths. Non-fatal serious adverse events were significantly increased when

  3. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
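The L-curve mentioned above trades off residual norm against solution norm as the Tikhonov parameter λ is swept; its corner marks a balanced choice. A small SVD-based sketch on a synthetic ill-posed problem (toy data and a plain parameter sweep, not the paper's Lanczos-bidiagonalization approximation):

```python
import numpy as np

def tikhonov_svd(U, s, Vt, b, lam):
    # Filtered SVD solution: x = sum_i s_i / (s_i^2 + lam^2) * (u_i . b) * v_i
    f = s / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b))

# Synthetic ill-posed problem: rapidly decaying singular values.
rng = np.random.default_rng(4)
n = 20
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(10.0 ** np.linspace(0, -6, n)) @ Q2.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
lams = 10.0 ** np.linspace(-8, 0, 50)
res, sol = [], []
for lam in lams:
    x = tikhonov_svd(U, s, Vt, b, lam)
    res.append(np.linalg.norm(A @ x - b))   # residual norm: one L-curve axis
    sol.append(np.linalg.norm(x))           # solution norm: the other axis
res, sol = np.array(res), np.array(sol)
```

For Tikhonov regularization the residual norm is nondecreasing and the solution norm nonincreasing in λ, which is what gives the L-curve its characteristic two-branch shape.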

  4. Comparing within-subject classification and regularization methods in fMRI for large and small sample sizes.

    PubMed

    Churchill, Nathan W; Yourganov, Grigori; Strother, Stephen C

    2014-09-01

    In recent years, a variety of multivariate classifier models have been applied to fMRI, with different modeling assumptions. When classifying high-dimensional fMRI data, we must also regularize to improve model stability, and the interactions between classifier and regularization techniques are still being investigated. Classifiers are usually compared on large, multisubject fMRI datasets. However, it is unclear how classifier/regularizer models perform for within-subject analyses, as a function of signal strength and sample size. We compare four standard classifiers: Linear and Quadratic Discriminants, Logistic Regression and Support Vector Machines. Classification was performed on data in the linear kernel (covariance) feature space, and classifiers are tuned with four commonly used regularizers: Principal Component and Independent Component Analysis, and penalization of kernel features using L₁ and L₂ norms. We evaluated prediction accuracy (P) and spatial reproducibility (R) of all classifier/regularizer combinations on single-subject analyses, over a range of three different block task contrasts and sample sizes for a BOLD fMRI experiment. We show that the classifier model has a small impact on signal detection, compared to the choice of regularizer. PCA maximizes reproducibility and global SNR, whereas Lp-norms tend to maximize prediction. ICA produces low reproducibility, and prediction accuracy is classifier-dependent. However, trade-offs in (P,R) depend partly on the optimization criterion, and PCA-based models are able to explore the widest range of (P,R) values. These trends are consistent across task contrasts and data sizes (training samples range from 6 to 96 scans). In addition, the trends in classifier performance are consistent for ROI-based classifier analyses. PMID:24639383
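As a minimal example of one classifier-plus-regularizer pairing from the list above, here is L2-penalized logistic regression fit by gradient descent on synthetic two-class data (a toy setup with invented dimensions and noise, not the paper's fMRI pipeline):

```python
import numpy as np

def fit_logistic_l2(X, y, lam=0.1, lr=0.1, n_iter=500):
    """Gradient descent on the L2-regularized logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted class probabilities
        grad = X.T @ (p - y) / len(y) + lam * w   # loss gradient + ridge penalty
        w -= lr * grad
    return w

rng = np.random.default_rng(5)
n = 200
X = rng.standard_normal((n, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -2.0, 1.5]                     # only 3 informative features
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)

w = fit_logistic_l2(X, y)
acc = np.mean((X @ w > 0) == (y == 1))
```

The ridge term lam * w is the stabilizer: with many voxels and few scans, the unpenalized weight vector would be driven by noise directions.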

  5. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a big challenge to extract optimal features that improve classification while simultaneously decreasing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with Fisher criteria. Therefore, the optimal low-dimensional features are obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms the other conventional approaches.

  6. Quasi-regular solutions to a class of 3D degenerating hyperbolic equations

    NASA Astrophysics Data System (ADS)

    Hristov, T. D.; Popivanov, N. I.; Schneider, M.

    2012-11-01

    In the fifties, M. Protter stated new three-dimensional (3D) boundary value problems (BVP) for mixed-type equations of first kind. For hyperbolic-elliptic equations they are multidimensional analogues of the classical two-dimensional (2D) Morawetz-Guderley transonic problem. Up to now, in this case, not a single example of a nontrivial solution to the new problem, nor a general existence result, is known. The difficulties appear even for BVP in the hyperbolic part of the domain, which were formulated by Protter for weakly hyperbolic equations. In that case the Protter problems are 3D analogues of the plane Darboux or Cauchy-Goursat problems. It is interesting that, in contrast to the planar problems, the new 3D problems are strongly ill-posed. Some of the Protter problems for degenerating hyperbolic equations without lower-order terms, or even for the usual wave equation, have infinite-dimensional kernels. Therefore there are infinitely many orthogonality conditions for classical solvability of their adjoint problems. So it is interesting to obtain results for uniqueness of solutions by adding first-order terms to the equation. In the present paper we do this and find conditions on the coefficients under which we prove uniqueness of quasi-regular solutions to the Protter problems.

  7. On Nonperiodic Euler Flows with Hölder Regularity

    NASA Astrophysics Data System (ADS)

    Isett, Philip; Oh, Sung-Jin

    2016-08-01

    In (Isett, Regularity in time along the coarse scale flow for the Euler equations, 2013), the first author proposed a strengthening of Onsager's conjecture on the failure of energy conservation for incompressible Euler flows with Hölder regularity not exceeding 1/3. This stronger form of the conjecture implies that anomalous dissipation will fail for a generic Euler flow with regularity below the Onsager critical space L_t^∞ B_{3,∞}^{1/3} due to low regularity of the energy profile. This paper is the first and main paper in a series of two, the results of which may be viewed as first steps towards establishing the conjectured failure of energy regularity for generic solutions with Hölder exponent less than 1/5. The main result of the present paper shows that any given smooth Euler flow can be perturbed in C^{1/5-ε}_{t,x} on any pre-compact subset of R × R^3 to violate energy conservation. Furthermore, the perturbed solution is no smoother than C^{1/5-ε}_{t,x}. As a corollary of this theorem, we show the existence of nonzero C^{1/5-ε}_{t,x} solutions to Euler with compact space-time support, generalizing previous work of the first author (Isett, Hölder continuous Euler flows in three dimensions with compact support in time, 2012) to the nonperiodic setting.

  8. X-ray computed tomography using curvelet sparse regularization

    SciTech Connect

    Wieczorek, Matthias; Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.

    2015-04-15

    Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method’s strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
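
    The alternating direction method of multipliers (ADMM) mentioned above splits the data-fidelity term from the sparsity penalty and handles the penalty with a soft-thresholding step. The pure-Python sketch below applies this splitting to a tiny diagonal problem; it illustrates only the ADMM/ℓ1 mechanics and is not the authors' curvelet-based CT reconstruction (the curvelet transform is omitted entirely, and all problem data are made up).

```python
# Minimal ADMM sketch for  min_x 1/2*||Ax - b||^2 + lam*||x||_1
# with a diagonal A, so the x-update has a closed form.
# Illustration of the splitting idea only, not the paper's method.

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*|.|_1."""
    return max(v - t, 0.0) - max(-v - t, 0.0)

def admm_l1(a, b, lam, rho=1.0, iters=200):
    n = len(a)
    x = [0.0] * n   # primal variable
    z = [0.0] * n   # split copy carrying the l1 penalty
    u = [0.0] * n   # scaled dual variable
    for _ in range(iters):
        # x-update: minimize 1/2*(a_i*x - b_i)^2 + rho/2*(x - z_i + u_i)^2
        x = [(a[i] * b[i] + rho * (z[i] - u[i])) / (a[i] ** 2 + rho)
             for i in range(n)]
        # z-update: elementwise soft-thresholding (the sparsity step)
        z = [soft(x[i] + u[i], lam / rho) for i in range(n)]
        # dual update
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return x, z

x, z = admm_l1(a=[2.0, 0.5], b=[2.0, 1.0], lam=0.5)
```

    In the curvelet setting, the soft-thresholding step would act on curvelet coefficients rather than directly on the unknowns, with the transform and its adjoint wrapped around it.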

  9. Incorporating anatomical side information into PET reconstruction using nonlocal regularization.

    PubMed

    Nguyen, Van-Giang; Lee, Soo-Jin

    2013-10-01

    With the introduction of combined positron emission tomography (PET)/computed tomography (CT) or PET/magnetic resonance imaging (MRI) scanners, there is an increasing emphasis on reconstructing PET images with the aid of anatomical side information obtained from X-ray CT or MRI scanners. In this paper, we propose a new approach to incorporating prior anatomical information into PET reconstruction using the nonlocal regularization method. The nonlocal regularizer developed for this application is designed to selectively consider the anatomical information only when it is reliable. As our proposed nonlocal regularization method does not directly use anatomical edges or boundaries, which are often used in conventional methods, it is not only free from additional processes to extract anatomical boundaries or segmented regions, but also more robust to the signal mismatch problem caused by the indirect relationship between the PET image and the anatomical image. We perform simulations with digital phantoms. According to our experimental results, compared to a conventional method based on traditional local regularization, our nonlocal regularization method performs well even with imperfect prior anatomical information or in the presence of signal mismatch between the PET image and the anatomical image. PMID:23744678
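
    A toy 1-D sketch of the underlying idea: similarity weights computed from the anatomical image damp the smoothing penalty across anatomical boundaries, so an emission signal whose edge coincides with the anatomical edge is penalized less than one whose edge falls elsewhere. This is illustrative only (made-up signals, a plain quadratic nonlocal penalty); the authors' regularizer additionally decides when the anatomy is reliable.

```python
import math

# Nonlocal quadratic penalty  R(x) = sum_ij w_ij * (x_i - x_j)^2,
# with weights w_ij derived from an anatomical signal: pairs that
# straddle an anatomical boundary get near-zero weight, so edges
# aligned with the anatomy are cheap. Toy sketch, not the paper's
# exact regularizer.

def nonlocal_penalty(x, anat, h=0.3, radius=2):
    p = 0.0
    n = len(x)
    for i in range(n):
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            if j == i:
                continue
            w = math.exp(-((anat[i] - anat[j]) ** 2) / h ** 2)
            p += w * (x[i] - x[j]) ** 2
    return p

anat = [0.0] * 5 + [1.0] * 5        # anatomical edge at index 5
x_aligned = [0.0] * 5 + [1.0] * 5   # emission edge at the same place
x_shifted = [0.0] * 3 + [1.0] * 7   # emission edge elsewhere (mismatch)

p_aligned = nonlocal_penalty(x_aligned, anat)
p_shifted = nonlocal_penalty(x_shifted, anat)
```

    The aligned signal incurs almost no penalty, while the mismatched one is penalized heavily, which is the behavior that lets anatomical side information guide the reconstruction without an explicit boundary-extraction step.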

  10. In vivo impedance imaging with total variation regularization.

    PubMed

    Borsic, Andrea; Graham, Brad M; Adler, Andy; Lionheart, William R B

    2010-01-01

    We show that electrical impedance tomography (EIT) image reconstruction algorithms with regularization based on the total variation (TV) functional are suitable for in vivo imaging of physiological data. This reconstruction approach helps to preserve discontinuities in reconstructed profiles, such as step changes in electrical properties at interorgan boundaries, which are typically smoothed by traditional reconstruction algorithms. The use of the TV functional for regularization leads to the minimization of a nondifferentiable objective function in the inverse formulation. This cannot be efficiently solved with traditional optimization techniques such as the Newton method. We explore two implementations methods for regularization with the TV functional: the lagged diffusivity method and the primal dual-interior point method (PD-IPM). First we clarify the implementation details of these algorithms for EIT reconstruction. Next, we analyze the performance of these algorithms on noisy simulated data. Finally, we show reconstructed EIT images of in vivo data for ventilation and gastric emptying studies. In comparison to traditional quadratic regularization, TV regularization shows improved ability to reconstruct sharp contrasts. PMID:20051330
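
    Of the two implementations discussed, the lagged diffusivity method has a particularly compact fixed-point form: freeze the edge weights 1/|∇u| at the previous iterate and solve the resulting linear system. The sketch below applies it to a 1-D denoising problem rather than to EIT, using the usual smoothed TV term sqrt(t² + β²); all data and parameters are made up for illustration.

```python
# Lagged diffusivity sketch in 1-D: minimize
#   1/2*sum((u - f)^2) + lam*sum_i sqrt((u[i+1]-u[i])^2 + beta^2)
# by repeatedly solving a tridiagonal system with the edge weights
# frozen at the previous iterate. Illustrative only, not the EIT code.

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system by the Thomas algorithm."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = (upper[i] / m) if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def tv_lagged_diffusivity(f, lam=0.5, beta=0.01, iters=20):
    n = len(f)
    u = f[:]
    for _ in range(iters):
        # weights lam/phi'(du) frozen at the current iterate
        w = [lam / ((u[i + 1] - u[i]) ** 2 + beta ** 2) ** 0.5
             for i in range(n - 1)]
        diag = [1.0 + (w[i - 1] if i > 0 else 0.0)
                    + (w[i] if i < n - 1 else 0.0) for i in range(n)]
        lower = [0.0] + [-w[i - 1] for i in range(1, n)]
        upper = [-w[i] for i in range(n - 1)] + [0.0]
        u = thomas(lower, diag, upper, f)
    return u

clean = [0.0] * 20 + [1.0] * 20
noisy = [c + (0.2 if i % 2 == 0 else -0.2) for i, c in enumerate(clean)]
denoised = tv_lagged_diffusivity(noisy)
```

    Note how the step between the two plateaus survives denoising, which is exactly the edge-preserving behavior that makes TV attractive at interorgan boundaries.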

  11. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT wavelet-frame regularizer shows promise for SPECT image reconstruction with the PAPA method.
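
    A minimal sketch of the regularizer's action, assuming a plain orthonormal DCT-II in place of the non-decimated DCT wavelet frame used in the paper: the ℓ1 proximal step that PAPA-type algorithms build on amounts to soft-thresholding the transform coefficients. Pure Python, O(N²), illustrative only.

```python
import math

# Orthonormal DCT-II and its inverse, plus one soft-thresholding step
# on the coefficients -- the l1 proximal step underlying penalized-
# likelihood methods. A sketch of the regularizer's action, not the
# full PAPA reconstruction (which also handles the Poisson likelihood).

def dct2(x):
    n = len(x)
    out = []
    for k in range(n):
        a = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(a * sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                           for i in range(n)))
    return out

def idct2(c):
    n = len(c)
    out = []
    for i in range(n):
        s = 0.0
        for k in range(n):
            a = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            s += a * c[k] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
        out.append(s)
    return out

def soft(v, t):
    return max(v - t, 0.0) - max(-v - t, 0.0)

def denoise_step(x, t=0.1):
    """One proximal step: shrink small DCT coefficients toward zero."""
    return idct2([soft(c, t) for c in dct2(x)])

x = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
```

    Because the transform here is orthonormal, the proximal step is exact; with the paper's redundant (non-decimated) frame, the same shrinkage is applied inside an iterative scheme instead.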

  12. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically. PMID:26735744

  13. Fast multislice fluorescence molecular tomography using sparsity-inducing regularization

    NASA Astrophysics Data System (ADS)

    Hejazi, Sedigheh Marjaneh; Sarkar, Saeed; Darezereshki, Ziba

    2016-02-01

    Fluorescence molecular tomography (FMT) is a rapidly growing imaging method that facilitates the recovery of small fluorescent targets within biological tissue. The major challenge facing the FMT reconstruction method is the ill-posed nature of the inverse problem. In order to overcome this problem, the acquisition of large FMT datasets and the utilization of a fast FMT reconstruction algorithm with sparsity regularization have been suggested recently. Therefore, the use of a joint L1/total-variation (TV) regularization as a means of solving the ill-posed FMT inverse problem is proposed. A comparative quantified analysis of regularization methods based on L1-norm and TV are performed using simulated datasets, and the results show that the fast composite splitting algorithm regularization method can ensure the accuracy and robustness of the FMT reconstruction. The feasibility of the proposed method is evaluated in an in vivo scenario for the subcutaneous implantation of a fluorescent-dye-filled capillary tube in a mouse, and also using hybrid FMT and x-ray computed tomography data. The results show that the proposed regularization overcomes the difficulties created by the ill-posed inverse problem.

  14. Regularized total least squares approach for nonconvolutional linear inverse problems.

    PubMed

    Zhu, W; Wang, Y; Galatsanos, N P; Zhang, J

    1999-01-01

    In this correspondence, a solution is developed for the regularized total least squares (RTLS) estimate in linear inverse problems where the linear operator is nonconvolutional. Our approach is based on a Rayleigh quotient (RQ) formulation of the TLS problem, and we accomplish regularization by modifying the RQ function to enforce a smooth solution. A conjugate gradient algorithm is used to minimize the modified RQ function. As an example, the proposed approach has been applied to the perturbation equation encountered in optical tomography. Simulation results show that this method provides more stable and accurate solutions than the regularized least squares and a previously reported total least squares approach, also based on the RQ formulation. PMID:18267442

  15. Regularity based descriptor computed from local image oscillations.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo; Legrand, Pierrick; Lutton, Evelyne

    2007-05-14

    This work presents a novel local image descriptor based on the concept of pointwise signal regularity. Local image regions are extracted using either an interest point or an interest region detector, and discriminative feature vectors are constructed by uniformly sampling the pointwise Hölderian regularity around each region center. Regularity estimation is performed using local image oscillations, the most straightforward method directly derived from the definition of the Hölder exponent. Furthermore, estimating the Hölder exponent in this manner has proven to be superior, in most cases, to wavelet-based estimation, as shown in previous work. Our detector shows invariance to illumination change, JPEG compression, image rotation and scale change. Results show that the proposed descriptor is stable with respect to variations in imaging conditions, and reliable performance metrics prove it to be comparable and in some instances better than SIFT, the state of the art in local descriptors. PMID:19546918
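
    The oscillation-based estimate works directly from the definition: the oscillation osc_r(x0) = sup − inf of the signal over a ball of radius r scales like r^α for a function that is Hölder-α at x0, so α can be read off as the slope of log(osc) against log(r). The 1-D sketch below recovers α ≈ 0.5 for f(x) = |x|^0.5 at the origin; it is a simplified illustration of the estimation idea, not the descriptor pipeline.

```python
import math

# Pointwise Holder-exponent estimate from local oscillations:
# osc_r(x0) = max - min of f over [x0 - r, x0 + r]; for f that is
# Holder-alpha at x0, osc_r ~ r^alpha, so alpha is the least-squares
# slope of log(osc) versus log(r). Simplified 1-D sketch only.

def holder_exponent(f, x0, radii, step=1e-4):
    logs = []
    for r in radii:
        xs = [x0 - r + step * k for k in range(int(2 * r / step) + 1)]
        vals = [f(x) for x in xs]
        osc = max(vals) - min(vals)
        logs.append((math.log(r), math.log(osc)))
    # least-squares slope of log(osc) against log(r)
    n = len(logs)
    mx = sum(p[0] for p in logs) / n
    my = sum(p[1] for p in logs) / n
    num = sum((p[0] - mx) * (p[1] - my) for p in logs)
    den = sum((p[0] - mx) ** 2 for p in logs)
    return num / den

alpha = holder_exponent(lambda x: abs(x) ** 0.5, 0.0,
                        [0.01, 0.02, 0.04, 0.08])
```

    In the descriptor, this per-point estimate is sampled on a grid around each region center to build the feature vector.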

  16. Selecting protein families for environmental features based on manifold regularization.

    PubMed

    Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong

    2014-06-01

    Recently, statistical and machine-learning methods have been developed to identify functional or taxonomic features associated with environmental conditions or physiological status. Proteins (or other functional and taxonomic entities) that are important to environmental features can potentially be used as biosensors. A major challenge is understanding how the distribution of protein and gene functions embodies the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by locally linear embedding (LLE) and we call it manifold-constrained regularization for linear regression (McRe). The novel regularization procedure also has potential to be used in solving other linear systems. We demonstrate the efficiency and performance of the approach on both simulated and real data. PMID:24802701

  17. Breast ultrasound tomography with total-variation regularization

    SciTech Connect

    Huang, Lianjie; Li, Cuiping; Duric, Neb

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate the applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.

  18. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data. PMID:18255433

  19. Analysis of the "Learning in Regular Classrooms" movement in China.

    PubMed

    Deng, M; Manset, G

    2000-04-01

    The Learning in Regular Classrooms experiment has evolved in response to China's efforts to educate its large population of students with disabilities who, until the mid-1980s, were denied a free education. In the Learning in Regular Classrooms, students with disabilities (primarily sensory impairments or mild mental retardation) are educated in neighborhood schools in mainstream classrooms. Despite difficulties associated with developing effective inclusive programming, this approach has contributed to a major increase in the enrollment of students with disabilities and increased involvement of schools, teachers, and parents in China's newly developing special education system. Here we describe the development of the Learning in Regular Classroom approach and the challenges associated with educating students with disabilities in China. PMID:10804702

  20. Hybrid regularization image restoration algorithm based on total variation

    NASA Astrophysics Data System (ADS)

    Zhang, Hongmin; Wang, Yan

    2013-09-01

    To reduce the noise amplification and ripple phenomenon in restoration results obtained with the traditional Richardson-Lucy deconvolution method, a novel hybrid regularization image restoration algorithm based on total variation is proposed in this paper. The key idea is that the hybrid regularization terms are employed according to the characteristics of different regions in the image itself. At the same time, the threshold between the different regularization terms is selected according to the golden-section point, which takes into account the human eye's visual perception. Experimental results show that the restoration results of the proposed method are better than those of the total-variation Richardson-Lucy algorithm in both PSNR and MSE, and it also achieves a better visual effect.

  1. Structural characterization of the packings of granular regular polygons.

    PubMed

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that the packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by increasing the edge number and decreasing friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons. PMID:26764678

  2. Manufacture of Regularly Shaped Sol-Gel Pellets

    NASA Technical Reports Server (NTRS)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, and regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to form xerogels, (2) solvent-exchanged and dried at ambient pressure to form ambigels, or (3) supercritically dried to form aerogels. Advantageously, the extruded pellets can be dropped directly into a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross-linker. These pellets can be dried to mechanically robust X-Aerogels.

  3. Quantum backflow states from eigenstates of the regularized current operator

    NASA Astrophysics Data System (ADS)

    Halliwell, J. J.; Gillman, E.; Lennon, O.; Patel, M.; Ramirez, I.

    2013-11-01

    We present an exhaustive class of states with quantum backflow—the phenomenon in which a state consisting entirely of positive momenta has negative current and the probability flows in the opposite direction to the momentum. They are characterized by a general function of momenta subject to very weak conditions. Such a family of states is of interest in the light of a recent experimental proposal to measure backflow. We find one particularly simple state which has surprisingly large backflow—about 41% of the lower bound on flux derived by Bracken and Melloy. We study the eigenstates of a regularized current operator and we show how some of these states, in a certain limit, lead to our class of backflow states. This limit also clarifies the correspondence between the spectrum of the regularized current operator, which has just two non-zero eigenvalues in our chosen regularization, and the usual current operator.
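
    The backflow phenomenon itself is easy to exhibit numerically with the standard two-plane-wave example (not the paper's eigenstate construction): a superposition of two waves with strictly positive momenta can still have a negative probability current at some points. Units ħ = m = 1; the specific momenta and amplitude below are chosen for illustration.

```python
import math

# For psi(x) = exp(i*k1*x) + a*exp(i*k2*x) with k1, k2 > 0,
# the probability current j(x) = Im(psi* dpsi/dx) works out to
#   j(x) = k1 + a^2*k2 + a*(k1 + k2)*cos((k2 - k1)*x),
# which dips below zero for suitable a even though both momenta are
# positive -- quantum backflow. Standard illustration, hbar = m = 1.

def current(x, k1=1.0, k2=4.0, a=0.5):
    return k1 + a * a * k2 + a * (k1 + k2) * math.cos((k2 - k1) * x)

xs = [i * 0.01 for i in range(700)]   # covers several periods
j_min = min(current(x) for x in xs)   # negative despite k1, k2 > 0
```

    Here min j = k1 + a²k2 − a(k1 + k2) = −0.5, so the probability flows against the momentum near x = π/3.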

  4. The ARM Best Estimate 2-dimensional Gridded Surface

    SciTech Connect

    Xie,Shaocheng; Qi, Tang

    2015-06-15

    The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.

  5. Local conservative regularizations of compressible magnetohydrodynamic and neutral flows

    NASA Astrophysics Data System (ADS)

    Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.

    2016-02-01

    Ideal systems like magnetohydrodynamics (MHD) and Euler flow may develop singularities in vorticity (w = ∇ × v). Viscosity and resistivity provide dissipative regularizations of the singularities. In this paper, we propose a minimal, local, conservative, nonlinear, dispersive regularization of compressible flow and ideal MHD, in analogy with the KdV regularization of the 1D kinematic wave equation. This work extends and significantly generalizes earlier work on incompressible Euler and ideal MHD. It involves a micro-scale cutoff length λ which is a function of density, unlike in the incompressible case. In MHD, it can be taken to be of order the electron collisionless skin depth c/ω_pe. Our regularization preserves the symmetries of the original systems and, with appropriate boundary conditions, leads to associated conservation laws. Energy and enstrophy are subject to a priori bounds determined by initial data in contrast to the unregularized systems. A Hamiltonian and Poisson bracket formulation is developed and applied to generalize the constitutive relation to bound higher moments of vorticity. A "swirl" velocity field is identified, and shown to transport w/ρ and B/ρ, generalizing the Kelvin-Helmholtz and Alfvén theorems. The steady regularized equations are used to model a rotating vortex, MHD pinch, and a plane vortex sheet. The proposed regularization could facilitate numerical simulations of fluid/MHD equations and provide a consistent statistical mechanics of vortices/current filaments in 3D, without blowup of enstrophy. Implications for detailed analyses of fluid and plasma dynamic systems arising from our work are briefly discussed.

  6. Processing SPARQL queries with regular expressions in RDF databases

    PubMed Central

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL, a W3C-recommended query language for RDF databases, has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from RDF data, as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use SPARQL queries with regular expression patterns for querying RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225
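
    The kind of query the framework targets looks like the following sketch, which matches protein names against a regular expression via SPARQL's FILTER regex. The prefix and property names are illustrative, in the spirit of UniProt/Bio2RDF-style data, and are not taken from the paper.

```sparql
# Illustrative only: prefix and property names are assumed,
# modeled loosely on the UniProt core vocabulary.
PREFIX up: <http://purl.uniprot.org/core/>
SELECT ?protein ?name
WHERE {
  ?protein up:recommendedName ?nameNode .
  ?nameNode up:fullName ?name .
  FILTER regex(?name, "kinase$", "i")   # case-insensitive suffix match
}
```

    Evaluating the FILTER naively forces the engine to scan every candidate literal, which is why a cost model and index support for the regex step matter for performance.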

  7. Regular heartbeat dynamics are associated with cardiac health.

    PubMed

    Cysarz, Dirk; Lange, Silke; Matthiessen, Peter F; Leeuwen, Peter van

    2007-01-01

    The human heartbeat series is more variable and, hence, more complex in healthy subjects than in congestive heart failure (CHF) patients. However, little is known about the complexity of the heart rate variations on a beat-to-beat basis. We present an analysis based on symbolic dynamics that focuses on the dynamic features of such beat-to-beat variations on a small time scale. The sequence of acceleration and deceleration of eight successive heartbeats is represented by a binary sequence consisting of ones and zeros. The regularity of such binary patterns is quantified using approximate entropy (ApEn). Holter electrocardiograms from 30 healthy subjects, 15 patients with CHF, and their surrogate data were analyzed with respect to the regularity of such binary sequences. The results are compared with spectral analysis and ApEn of heart rate variability. Counterintuitively, healthy subjects show a large amount of regular beat-to-beat patterns in addition to a considerable amount of irregular patterns. CHF patients show a predominance of one regular beat-to-beat pattern (alternation of acceleration and deceleration), as well as some irregular patterns similar to the patterns observed in the surrogate data. In healthy subjects, regular beat-to-beat patterns reflect the physiological adaptation to different activities, i.e., sympathetic modulation, whereas irregular patterns may arise from parasympathetic modulation. The patterns observed in CHF patients indicate a largely reduced influence of the autonomic nervous system. In conclusion, analysis of short beat-to-beat patterns with respect to regularity leads to a considerable increase of information compared with spectral analysis or ApEn of heart-rate variations. PMID:16973939
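
    A sketch of the symbolic-dynamics idea: encode each beat-to-beat change as 1 (acceleration, RR interval shortens) or 0 (deceleration), then score the regularity of the resulting binary word with approximate entropy. The data and parameters below are toy stand-ins, not the study's Holter pipeline; the strictly alternating word mimics the pattern reported as dominant in CHF, and a pseudo-random word stands in for mixed healthy variability.

```python
import math
import random

# Symbolic dynamics of heartbeat series: binarize RR-interval changes,
# then quantify pattern regularity with approximate entropy (ApEn).
# Toy sketch with made-up sequences, not the study's analysis.

def binarize(rr):
    """1 = acceleration (interval shortens), 0 = deceleration."""
    return [1 if b < a else 0 for a, b in zip(rr, rr[1:])]

def apen(u, m=2, r=0.5):
    """Approximate entropy; for binary data, r=0.5 means exact match."""
    def phi(m):
        n = len(u) - m + 1
        pats = [tuple(u[i:i + m]) for i in range(n)]
        counts = [sum(1 for q in pats
                      if max(abs(a - b) for a, b in zip(p, q)) <= r)
                  for p in pats]
        return sum(math.log(c / n) for c in counts) / n
    return phi(m) - phi(m + 1)

# strictly alternating accel/decel, as reported to dominate in CHF
alternating = [0, 1] * 32
# pseudo-random word as a stand-in for mixed healthy patterns
random.seed(1)
irregular = [random.randint(0, 1) for _ in range(64)]
```

    The alternating word scores an ApEn near zero (highly regular), the pseudo-random word scores much higher, mirroring the paper's contrast between a single dominant pattern and a mix of regular and irregular ones.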

  8. Regularization of languages by adults and children: A mathematical framework.

    PubMed

    Rische, Jacquelyn L; Komarova, Natalia L

    2016-02-01

    The fascinating ability of humans to modify the linguistic input and "create" a language has been widely discussed. In the work of Newport and colleagues, it has been demonstrated that both children and adults have some ability to process inconsistent linguistic input and "improve" it by making it more consistent. In Hudson Kam and Newport (2009), artificial miniature language acquisition from an inconsistent source was studied. It was shown that (i) children are better at language regularization than adults and that (ii) adults can also regularize, depending on the structure of the input. In this paper we create a learning algorithm of the reinforcement-learning type, which exhibits patterns reported in Hudson Kam and Newport (2009) and suggests a way to explain them. It turns out that in order to capture the differences between children's and adults' learning patterns, we need to introduce a certain asymmetry in the learning algorithm. Namely, we have to assume that the reaction of the learners differs depending on whether or not the source's input coincides with the learner's internal hypothesis. We interpret this result in the context of a different reaction of children and adults to implicit, expectation-based evidence, positive or negative. We propose that a possible mechanism that contributes to the children's ability to regularize an inconsistent input is related to their heightened sensitivity to positive evidence rather than the (implicit) negative evidence. In our model, regularization comes naturally as a consequence of a stronger reaction of the children to evidence supporting their preferred hypothesis. In adults, their ability to adequately process implicit negative evidence prevents them from regularizing the inconsistent input, resulting in a weaker degree of regularization. PMID:26580218
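
    The asymmetry described above can be caricatured in a few lines. The toy below is deliberately minimal and is not the authors' reinforcement-learning model: a "child-like" learner updates only on evidence confirming its preferred hypothesis, while an "adult-like" learner also responds to implicit negative evidence. Fed the same 70/30 inconsistent input, the first saturates (regularizes) and the second roughly frequency-matches. All names, rates, and the input stream are made up for illustration.

```python
# Toy illustration (not the paper's model): asymmetric updating makes
# a "child-like" learner regularize a 70/30 inconsistent input, while
# a symmetric "adult-like" learner ends up near frequency-matching.

def run(update_on_mismatch, inputs, eta=0.05):
    p = 0.5  # probability of producing the majority form "A"
    for token in inputs:
        if token == "A":
            p += eta * (1 - p)   # confirming evidence: boost
        elif update_on_mismatch:
            p -= eta * p         # implicit negative evidence: damp
    return p

# deterministic 70% A / 30% B input stream
stream = (["A"] * 7 + ["B"] * 3) * 40

p_child = run(update_on_mismatch=False, inputs=stream)  # saturates near 1
p_adult = run(update_on_mismatch=True, inputs=stream)   # hovers near 0.7
```

    The qualitative gap (p_child near 1, p_adult near the input frequency) is the regularization contrast the paper models in a far more careful way.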

  9. Zigzag stacks and m-regular linear stacks.

    PubMed

    Chen, William Y C; Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2014-12-01

    The contact map of a protein fold is a graph that represents the patterns of contacts in the fold. It is known that the contact map can be decomposed into stacks and queues. RNA secondary structures are special stacks in which the degree of each vertex is at most one and each arc has length of at least two. Waterman and Smith derived a formula for the number of RNA secondary structures of length n with exactly k arcs. Höner zu Siederdissen et al. developed a folding algorithm for extended RNA secondary structures in which each vertex has maximum degree two. An equation for the generating function of extended RNA secondary structures was obtained by Müller and Nebel by using a context-free grammar approach, which leads to an asymptotic formula. In this article, we consider m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by two. Extended RNA secondary structures are exactly 2-regular linear stacks. For any m ≥ 2, we obtain an equation for the generating function of the m-regular linear stacks. For given m, we deduce a recurrence relation and an asymptotic formula for the number of m-regular linear stacks on n vertices. To establish the equation, we use the reduction operation of Chen, Deng, and Du to transform an m-regular linear stack to an m-reduced zigzag (or alternating) stack. Then we find an equation for m-reduced zigzag stacks leading to an equation for m-regular linear stacks. PMID:25455155
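
    For the classical case the abstract builds on, the counts are easy to compute: RNA secondary structures (each vertex in at most one arc, arcs of length at least two) satisfy a Waterman-Smith-type recurrence. The sketch below computes those classical counts; it is not the paper's enumeration of m-regular linear stacks, which allow vertex degree up to two.

```python
# Counting classical RNA secondary structures (vertex degree <= 1,
# arc length >= 2) by the standard recurrence
#   S(n) = S(n-1) + sum_{j=1}^{n-2} S(j-1) * S(n-j-1),  S(0) = S(1) = 1:
# either the last vertex n is unpaired, or it pairs with some j <= n-2,
# splitting the structure into the part inside the arc (j+1..n-1) and
# the part before it (1..j-1). Classical count only, not the paper's
# m-regular linear stacks.

def secondary_structures(n_max):
    s = [1, 1]  # S(0), S(1)
    for n in range(2, n_max + 1):
        total = s[n - 1]            # last vertex unpaired
        for j in range(1, n - 1):   # last vertex pairs with vertex j
            total += s[j - 1] * s[n - j - 1]
        s.append(total)
    return s

counts = secondary_structures(8)   # 1, 1, 1, 2, 4, 8, 17, 37, 82
```

    The m-regular generalization replaces the arc-length bound 2 by m and relaxes the degree bound, which is where the zigzag-stack reduction in the paper comes in.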

  10. McGehee regularization of general SO(3)-invariant potentials and applications to stationary and spherically symmetric spacetimes

    NASA Astrophysics Data System (ADS)

    Galindo, Pablo; Mars, Marc

    2014-12-01

    The McGehee regularization is a method to study the singularity at the origin of the dynamical system describing a point particle in a plane moving under the action of a power-law potential. It was used by Belbruno and Pretorius (2011 Class. Quantum Grav. 28 195007) to perform a dynamical system regularization of the singularity at the center of the motion of massless test particles in the Schwarzschild spacetime. In this paper, we generalize the McGehee transformation so that we can regularize the singularity at the origin of the dynamical system describing the motion of causal geodesics (timelike or null) in any stationary and spherically symmetric spacetime of Kerr-Schild form. We first show that the geodesics for both massive and massless particles can be described globally in the Kerr-Schild spacetime as the motion of a Newtonian point particle in a suitable radial potential and study the conditions under which the central singularity can be regularized using an extension of the McGehee method. As an example, we apply these results to causal geodesics in the Schwarzschild and Reissner-Nordström spacetimes. Interestingly, the geodesic trajectories in the whole maximal extension of both spacetimes can be described by a single two-dimensional phase space with non-trivial topology. This topology arises from the presence of excluded regions in the phase space determined by the condition that the tangent vector of the geodesic be causal and future directed.

  11. Charged scalar perturbations around a regular magnetic black hole

    NASA Astrophysics Data System (ADS)

    Huang, Yang; Liu, Dao-Jun

    2016-05-01

    We study charged scalar perturbations in the background of a regular magnetic black hole. In this case, the charged scalar perturbation does not result in superradiance. By using a careful time-domain analysis, we show that the charge of the scalar field can change the real part of the quasinormal frequency, but has little impact on the imaginary part of the quasinormal frequency and the behavior of the late-time tail. Therefore, the regular magnetic black hole may be stable under the perturbations of a charged scalar field at the linear level.

  12. Mixing of regular and chaotic orbits in beams

    SciTech Connect

    Courtlandt L. Bohn et al.

    2002-09-04

    Phase mixing of chaotic orbits exponentially distributes the orbits through their accessible phase space. This phenomenon, commonly called "chaotic mixing", stands in marked contrast to phase mixing of regular orbits which proceeds as a power law in time. It is inherently irreversible; hence, its associated e-folding time scale sets a condition on any process envisioned for emittance compensation. We numerically investigate phase mixing in the presence of space charge, distinguish between the evolution of regular and chaotic orbits, and discuss how phase mixing potentially influences macroscopic properties of high-brightness beams.

  13. On Vertex Covering Transversal Domination Number of Regular Graphs

    PubMed Central

    Vasanthi, R.; Subramanian, K.

    2016-01-01

    A simple graph G = (V, E) is said to be r-regular if each vertex of G is of degree r. The vertex covering transversal domination number γvct(G) is the minimum cardinality among all vertex covering transversal dominating sets of G. In this paper, we analyse this parameter on different kinds of regular graphs, especially for Qn and H3,n. We also provide an upper bound for γvct of a connected cubic graph of order n ≥ 8. We then aim to establish a stronger relationship between γ and γvct. PMID:27119089

  14. The cardiovascular effects of regular and decaffeinated coffee.

    PubMed Central

    Smits, P; Thien, T; Van 't Laar, A

    1985-01-01

    In a single-blind study the effects of drinking two cups of regular or decaffeinated coffee on blood pressure, heart rate, forearm blood flow and plasma concentrations of caffeine, renin and catecholamines were studied in 12 normotensive subjects. Drinking regular coffee led to a rise of blood pressure, a fall of heart rate and an increase of plasma catecholamines. Decaffeinated coffee induced a smaller increase of diastolic blood pressure without changing other parameters. This study shows that the cardiovascular effects of drinking coffee are mainly the result of its caffeine content. PMID:4027129

  15. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization.

    PubMed

    Canales-Rodríguez, Erick J; Daducci, Alessandro; Sotiropoulos, Stamatios N; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
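
    The Rician noise the method targets arises whenever a complex signal carrying independent Gaussian noise on its real and imaginary channels is magnitude-combined. A hypothetical illustration of that noise model (not RUMBA-SD itself):

```python
import numpy as np

# Magnitude of a complex signal with i.i.d. Gaussian noise on the real and
# imaginary channels follows a Rician distribution; for zero underlying
# signal it reduces to a Rayleigh distribution with mean sigma*sqrt(pi/2).

rng = np.random.default_rng(0)
sigma, n = 1.0, 200_000

def rician_samples(signal, sigma, n):
    return np.hypot(signal + rng.normal(0, sigma, n), rng.normal(0, sigma, n))

zero_signal = rician_samples(0.0, sigma, n)
print(abs(zero_signal.mean() - sigma * np.sqrt(np.pi / 2)) < 0.01)  # True

# With a strong signal the distribution is approximately Gaussian around it,
# shifted upward only slightly (by about sigma^2 / (2 * signal)):
strong = rician_samples(20.0, sigma, n)
print(abs(strong.mean() - 20.0) < 0.05)  # True
```

    The systematic positive bias at low signal-to-noise is exactly what makes the zero-mean Gaussian assumption criticized in the abstract break down.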

  16. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024

  17. Regularized iterative weighted filtered backprojection for helical cone-beam CT.

    PubMed

    Sunnegårdh, Johan; Danielsson, Per-Erik

    2008-09-01

    Contemporary reconstruction methods employed for clinical helical cone-beam computed tomography (CT) are analytical (noniterative) but mathematically nonexact, i.e., the reconstructed image contains so called cone-beam artifacts, especially for higher cone angles. Besides cone artifacts, these methods also suffer from windmill artifacts: alternating dark and bright regions creating spiral-like patterns occurring in the vicinity of high z-direction derivatives. In this article, the authors examine the possibility of suppressing cone and windmill artifacts by means of iterative application of nonexact three-dimensional filtered backprojection, where the analytical part of the reconstruction brings about accelerated convergence. Specifically, they base their investigations on the weighted filtered backprojection method [Stierstorfer et al., Phys. Med. Biol. 49, 2209-2218 (2004)]. Enhancement of high frequencies and amplification of noise are common but unwanted side effects in many acceleration attempts. They have employed linear regularization to avoid these effects and to improve the convergence properties of the iterative scheme. Artifacts and noise, as well as spatial resolution in terms of modulation transfer functions and slice sensitivity profiles have been measured. The results show that for cone angles up to +/-2.78 degrees, cone artifacts are suppressed and windmill artifacts are alleviated within three iterations. Furthermore, regularization parameters controlling spatial resolution can be tuned so that image quality in terms of spatial resolution and noise is preserved. Simulations with higher number of iterations and long objects (exceeding the measured region) verify that the size of the reconstructible region is not reduced, and that the regularization greatly improves the convergence properties of the iterative scheme. Taking these results into account, and the possibilities to extend the proposed method with more accurate modeling of the acquisition

  18. Self-equilibrium and stability of regular truncated tetrahedral tensegrity structures

    NASA Astrophysics Data System (ADS)

    Zhang, J. Y.; Ohsaki, M.

    2012-10-01

    This paper presents analytical conditions of self-equilibrium and super-stability for the regular truncated tetrahedral tensegrity structures, nodes of which have one-to-one correspondence to the tetrahedral group. These conditions are presented in terms of force densities, by investigating the block-diagonalized force density matrix. The block-diagonalized force density matrix, with independent sub-matrices lying on its leading diagonal, is derived by making use of the tetrahedral symmetry via group representation theory. The condition for self-equilibrium is found by enforcing the force density matrix to have the necessary number of nullities, which is four for three-dimensional structures. The condition for super-stability is further presented by guaranteeing positive semi-definiteness of the force density matrix.
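
    The force density matrix discussed above is a weighted graph Laplacian, E = Cᵀ diag(q) C, assembled from the member-node incidence matrix C and the member force densities q; self-equilibrium in three dimensions requires its nullity to be at least four. A small sketch of the construction and the nullity test (a generic braced square as the example graph, not the truncated tetrahedron of the paper):

```python
import numpy as np

# Force density matrix E = C^T diag(q) C for a pin-jointed structure:
# C is the (members x nodes) incidence matrix, q the force densities.
# Self-equilibrium in 3D requires nullity(E) >= 4; with all q > 0
# (a connected cable net) the nullity is only 1, which is why struts
# with negative force densities are essential for tensegrity.

def force_density_matrix(members, q, n_nodes):
    C = np.zeros((len(members), n_nodes))
    for k, (i, j) in enumerate(members):
        C[k, i], C[k, j] = 1.0, -1.0
    return C.T @ np.diag(q) @ C

def nullity(E, tol=1e-9):
    return int(np.sum(np.linalg.svd(E, compute_uv=False) < tol))

members = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a braced square
E = force_density_matrix(members, [1.0] * 5, 4)
print(nullity(E))  # 1: all-positive force densities give a connected Laplacian
```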

  19. Regular network model for the sea ice-albedo feedback in the Arctic.

    PubMed

    Müller-Stoffels, Marc; Wackerbauer, Renate

    2011-03-01

    The Arctic Ocean and sea ice form a feedback system that plays an important role in the global climate. The complexity of highly parameterized global circulation (climate) models makes it very difficult to assess feedback processes in climate without the concurrent use of simple models where the physics is understood. We introduce a two-dimensional energy-based regular network model to investigate feedback processes in an Arctic ice-ocean layer. The model includes the nonlinear aspect of the ice-water phase transition, a nonlinear diffusive energy transport within a heterogeneous ice-ocean lattice, and spatiotemporal atmospheric and oceanic forcing at the surfaces. First results for a horizontally homogeneous ice-ocean layer show bistability and related hysteresis between perennial ice and perennial open water for varying atmospheric heat influx. Seasonal ice cover exists as a transient phenomenon. We also find that ocean heat fluxes are more efficient than atmospheric heat fluxes to melt Arctic sea ice. PMID:21456825

  20. Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators

    NASA Astrophysics Data System (ADS)

    Wolfrum, Matthias; Omel'chenko, Oleh E.; Sieber, Jan

    2015-05-01

    We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop, connecting the phase lag to the order parameter, we can observe chimera states also for systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium.
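
    The underlying model class is the nonlocally coupled phase-oscillator ring, dθ_k/dt = ω - (2R+1)⁻¹ Σ_{|j-k|≤R} sin(θ_k - θ_j + α), with phase lag α. A minimal Euler-integration sketch with illustrative parameters (not those used in the paper):

```python
import numpy as np

# Ring of N phase oscillators, each coupled to R neighbors on either side
# with phase lag alpha:
#   dtheta_k/dt = omega - (1/(2R+1)) * sum_{|j-k|<=R} sin(theta_k - theta_j + alpha)

def step(theta, R, alpha, omega=0.0, dt=0.01):
    N = len(theta)
    dtheta = np.empty(N)
    for k in range(N):
        idx = np.arange(k - R, k + R + 1) % N  # nonlocal neighborhood on the ring
        dtheta[k] = omega - np.mean(np.sin(theta[k] - theta[idx] + alpha))
    return theta + dt * dtheta

# The fully synchronized state theta_k = const drifts rigidly at rate -sin(alpha):
theta = np.zeros(40)
for _ in range(100):
    theta = step(theta, R=12, alpha=0.5)
print(np.allclose(theta, theta[0]))  # True: the state stays spatially uniform
```

    Chimera states appear from non-uniform initial conditions in suitable (R, α) ranges; the sketch only fixes the model and verifies the trivial synchronized solution.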

  1. Spatial-temporal total variation regularization (STTVR) for 4D-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Haibo; Maier, Andreas; Fahrig, Rebecca; Hornegger, Joachim

    2012-03-01

    Four-dimensional computed tomography (4D-CT) is very important for treatment planning in the thorax or abdomen, e.g., for guiding radiation therapy. The respiratory motion makes the reconstruction problem ill-posed. Recently, compressed sensing theory was introduced. It uses sparsity as a prior to solve the problem and improves image quality considerably. However, the images at each phase are reconstructed individually. The correlations between neighboring phases are not considered in the reconstruction process. In this paper, we propose the spatial-temporal total variation regularization (STTVR) method which not only employs the sparsity in the spatial domain but also in the temporal domain. The algorithm is validated with the XCAT thorax phantom. The Euclidean norm of the difference between the reconstructed image and the ground truth is calculated for evaluation. The results indicate that our method improves the reconstruction quality by more than 50% compared to standard ART.
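
    The STTVR penalty adds finite differences along the phase (time) axis to the usual spatial total variation. A minimal anisotropic sketch in numpy (the spatial/temporal weighting is an assumed parameter, not the one used in the paper):

```python
import numpy as np

# Anisotropic spatial-temporal total variation of a 4D-CT phase series
# f[t, z, y, x]: sum of absolute finite differences along the spatial axes
# plus a weighted sum of differences along the phase (time) axis.

def sttv(f, temporal_weight=1.0):
    spatial = sum(np.abs(np.diff(f, axis=a)).sum() for a in (1, 2, 3))
    temporal = np.abs(np.diff(f, axis=0)).sum()
    return spatial + temporal_weight * temporal

f = np.zeros((4, 2, 8, 8))
print(sttv(f))   # 0.0: a constant sequence has no variation
f[2:] = 1.0      # one jump along the time axis only
print(sttv(f))   # 128.0 = 2*8*8 voxels crossing the single temporal jump
```

    Minimizing such a penalty jointly over all phases is what couples neighboring phases that individual per-phase reconstruction ignores.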

  2. REGULARIZED 3D FUNCTIONAL REGRESSION FOR BRAIN IMAGE DATA VIA HAAR WAVELETS

    PubMed Central

    Wang, Xuejing; Nan, Bin; Zhu, Ji; Koeppe, Robert

    2015-01-01

    The primary motivation and application in this article come from brain imaging studies on cognitive impairment in elderly subjects with brain disorders. We propose a regularized Haar wavelet-based approach for the analysis of three-dimensional brain image data in the framework of functional data analysis, which automatically takes into account the spatial information among neighboring voxels. We conduct extensive simulation studies to evaluate the prediction performance of the proposed approach and its ability to identify related regions to the outcome of interest, with the underlying assumption that only few relatively small subregions are truly predictive of the outcome of interest. We then apply the proposed approach to searching for brain subregions that are associated with cognition using PET images of patients with Alzheimer’s disease, patients with mild cognitive impairment, and normal controls. PMID:26082826
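
    One level of the Haar transform underlying the basis used here is just pairwise scaled sums and differences. A 1D sketch (the article works with the 3D analogue over voxel grids):

```python
import numpy as np

# One level of the orthonormal 1D Haar transform: pairwise scaled sums give
# the coarse approximation, pairwise scaled differences the detail coefficients.

def haar_level(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_level_inverse(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0])
a, d = haar_level(x)
print(np.allclose(haar_level_inverse(a, d), x))  # True: perfect reconstruction
```

    Regularizing the detail coefficients rather than raw voxels is what lets the regression respect spatial neighborhoods automatically.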

  3. Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators

    SciTech Connect

    Wolfrum, Matthias; Omel'chenko, Oleh E.; Sieber, Jan

    2015-05-15

    We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop, connecting the phase lag to the order parameter, we can observe chimera states also for systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium.

  4. Diffraction of a shock wave by a compression corner; regular and single Mach reflection

    NASA Technical Reports Server (NTRS)

    Vijayashankar, V. S.; Kutler, P.; Anderson, D.

    1976-01-01

    The two dimensional, time dependent Euler equations which govern the flow field resulting from the injection of a planar shock with a compression corner are solved with initial conditions that result in either regular reflection or single Mach reflection of the incident planar shock. The Euler equations which are hyperbolic are transformed to include the self similarity of the problem. A normalization procedure is employed to align the reflected shock and the Mach stem as computational boundaries to implement the shock fitting procedure. A special floating fitting scheme is developed in conjunction with the method of characteristics to fit the slip surface. The reflected shock, the Mach stem, and the slip surface are all treated as harp discontinuities, thus, resulting in a more accurate description of the inviscid flow field. The resulting numerical solutions are compared with available experimental data and existing first-order, shock-capturing numerical solutions.

  5. Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators.

    PubMed

    Wolfrum, Matthias; Omel'chenko, Oleh E; Sieber, Jan

    2015-05-01

    We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop, connecting the phase lag to the order parameter, we can observe chimera states also for systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium. PMID:26026325

  6. Primary Feynman rules to calculate the ɛ-dimensional integrand of any 1-loop amplitude

    NASA Astrophysics Data System (ADS)

    Pittau, R.

    2012-02-01

    When using dimensional regularization/reduction the ɛ-dimensional numerator of the 1-loop Feynman diagrams gives rise to rational contributions. I list the set of fundamental rules that allow the extraction of such terms at the integrand level in any theory containing scalars, vectors and fermions, such as the electroweak standard model, QCD and SUSY.

  7. Rigidity percolation by next-nearest-neighbor bonds on generic and regular isostatic lattices.

    PubMed

    Zhang, Leyou; Rocklin, D Zeb; Chen, Bryan Gin-ge; Mao, Xiaoming

    2015-03-01

    We study rigidity percolation transitions in two-dimensional central-force isostatic lattices, including the square and the kagome lattices, as next-nearest-neighbor bonds ("braces") are randomly added to the system. In particular, we focus on the differences between regular lattices, which are perfectly periodic, and generic lattices with the same topology of bonds but whose sites are at random positions in space. We find that the regular square and kagome lattices exhibit a rigidity percolation transition when the number of braces is ∼L ln L, where L is the linear size of the lattice. This transition exhibits features of both first-order and second-order transitions: The whole lattice becomes rigid at the transition, and a diverging length scale also exists. In contrast, we find that the rigidity percolation transition in the generic lattices occurs when the number of braces is very close to the number obtained from Maxwell's law for floppy modes, which is ∼L. The transition in generic lattices is a very sharp first-order-like transition, at which the addition of one brace connects all small rigid regions in the bulk of the lattice, leaving only floppy modes on the edge. We characterize these transitions using numerical simulations and develop analytic theories capturing each transition. Our results relate to other interesting problems, including jamming and bootstrap percolation. PMID:25871071
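
    Maxwell's law referenced above counts floppy modes in 2D as F = 2N - N_bonds - 3, subtracting the three rigid-body motions of the plane; for an L x L square lattice with nearest-neighbor bonds this leaves F = 2L - 3, hence the ∼L braces needed generically. A small sketch:

```python
# Maxwell count of floppy modes for a 2D central-force network:
#   F = 2*N - N_bonds - 3   (3 rigid-body motions in the plane).
# For an L x L square lattice with nearest-neighbor bonds this leaves
# F = 2L - 3 floppy modes, so ~L generic braces rigidify it, while the
# regular lattice needs ~L*ln(L) randomly placed braces (see the abstract).

def maxwell_floppy_square(L):
    n_sites = L * L
    n_bonds = 2 * L * (L - 1)  # horizontal + vertical nearest-neighbor bonds
    return 2 * n_sites - n_bonds - 3

for L in (4, 8, 16):
    print(L, maxwell_floppy_square(L))  # prints (L, 2L - 3): 5, 13, 29
```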

  8. Inversion of velocity map ion images using iterative regularization and cross validation

    NASA Astrophysics Data System (ADS)

    Renth, F.; Riedel, J.; Temps, F.

    2006-03-01

    Two methods for improved inversion of velocity map images are presented. Both schemes use two-dimensional basis functions to perform the iteratively regularized inversion of the imaging equation in matrix form. The quality of the reconstructions is improved by taking into account the constraints that are derived from prior knowledge about the experimental data, such as non-negativity and noise statistics, using (i) the projected Landweber [Am. J. Math. 73, 615 (1951)] and (ii) the Richardson-Lucy [J. Opt. Soc. Am. 62, 55 (1972); Astron. J. 79, 745 (1974)] algorithms. It is shown that the optimum iteration count, which plays the role of a regularization parameter, can be determined by partitioning the image into quarters or halves and a subsequent cross validation of the inversion results. The methods are tested with various synthetic velocity map images and with velocity map images of the H-atom fragments produced in the photodissociation of HBr at λ =243.1nm using a (2+1) resonantly enhanced multiphoton ionization (REMPI) detection scheme. The versatility of the method, which is only determined by the choice of basis functions, is exploited to take into account the photoelectron recoil that leads to a splitting and broadening of the velocity distribution in the two product channels, and to successfully reconstruct the deconvolved velocity distribution. The methods can also be applied to the cases where higher order terms in the Legendre expansion of the angular distribution are present.
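
    For a discrete imaging equation b = Ax, the two iterations are simple to state: projected Landweber clips each gradient step to the nonnegative orthant, x ← max(0, x + ω Aᵀ(b - Ax)), while Richardson-Lucy applies the multiplicative update x ← x ⊙ Aᵀ(b / Ax) / Aᵀ1. A toy sketch on a small consistent system (not the two-dimensional basis-function setup of the paper):

```python
import numpy as np

# Toy inversion of b = A @ x with a nonnegativity constraint, using the two
# iterations the abstract applies to velocity-map images.

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
b = A @ x_true

def projected_landweber(A, b, n_iter=2000, omega=0.4):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(0.0, x + omega * A.T @ (b - A @ x))  # clipped gradient step
    return x

def richardson_lucy(A, b, n_iter=3000):
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(len(b))
    for _ in range(n_iter):
        x = x * (A.T @ (b / (A @ x))) / norm  # multiplicative update, x stays >= 0
    return x

print(np.allclose(projected_landweber(A, b), x_true, atol=1e-4))  # True
print(np.allclose(richardson_lucy(A, b), x_true, atol=1e-3))      # True
```

    In both schemes the iteration count acts as the regularization parameter for noisy data, which is exactly what the cross-validation scheme of the paper tunes.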

  9. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572

  10. Hamiltonian, Path Integral and BRST Formulations of the Vector Schwinger Model with a Photon Mass Term with Faddeevian Regularization

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.

    2016-01-01

    Recently (in a series of papers) we have studied the vector Schwinger model with a photon mass term describing one-space one-time dimensional electrodynamics with massless fermions in the so-called standard regularization. In the present work, we study this model in the Faddeevian regularization (FR). This theory in the FR is seen to be gauge-non-invariant (GNI). We study the Hamiltonian and path integral quantization of this GNI theory. We then construct a gauge-invariant (GI) theory corresponding to this GNI theory using the Stueckelberg mechanism and recover the physical content of the original GNI theory from the newly constructed GI theory under some special gauge-choice. Further, we study the Hamiltonian, path integral and Becchi-Rouet-Stora-Tyutin (BRST) formulations of the newly constructed GI theory under appropriate gauge-fixing conditions.

  11. Surface-based prostate registration with biomechanical regularization

    NASA Astrophysics Data System (ADS)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MR-US registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.
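
    The thin-plate spline interpolation used to propagate the FE node displacements can be sketched in scalar form: f(x) = Σᵢ wᵢ φ(‖x - xᵢ‖) + a₀ + a₁x + a₂y with φ(r) = r² log r, where the weights come from one dense linear solve. A hypothetical standalone 2D sketch (the actual pipeline interpolates 3D displacement vectors):

```python
import numpy as np

# Thin-plate spline interpolation in 2D: f(x) = sum_i w_i * phi(|x - x_i|)
# + a0 + a1*x + a2*y, with phi(r) = r^2 * log(r). Weights and affine part
# come from one dense linear solve; exact interpolation at the data points.

def tps_phi(r):
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(r), 0.0)

def tps_fit(points, values):
    n = len(points)
    K = tps_phi(np.linalg.norm(points[:, None] - points[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), points])        # affine part: 1, x, y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([values, np.zeros(3)])
    coeffs = np.linalg.solve(A, rhs)
    return coeffs[:n], coeffs[n:]

def tps_eval(x, points, w, a):
    r = np.linalg.norm(x[:, None] - points[None, :], axis=-1)
    return tps_phi(r) @ w + a[0] + x @ a[1:]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
vals = np.array([0.0, 1.0, 2.0, 1.5, 0.7])
w, a = tps_fit(pts, vals)
print(np.allclose(tps_eval(pts, pts, w, a), vals))  # True: exact at the nodes
```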

  12. 5 CFR 550.1307 - Authority to regularize paychecks.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... PAY ADMINISTRATION (GENERAL) Firefighter Pay § 550.1307 Authority to regularize paychecks. Upon a... an agency's plan to reduce or eliminate variation in the amount of firefighters' biweekly paychecks caused by work scheduling cycles that result in varying hours in the firefighters' tours of duty from...

  13. Rotating bearings in regular and irregular granular shear packings

    NASA Astrophysics Data System (ADS)

    Åström, J. A.

    2008-01-01

    For 2D regular dense packings of solid mono-size non-sliding disks there is a mechanism for bearing formation under shear that can be explained theoretically. There is, however, no easy way to extend this model to include random dense packings which would better describe natural packings. A numerical model that simulates shear deformation for both near-regular and irregular packings is used to demonstrate that rotating bearings appear roughly with the same density in random and regular packings. The main difference appears in the size distribution of the rotating clusters near the jamming threshold. The size distribution is well described by a scaling form with a large-size cut-off that seems to grow without bounds for regular packings at the jamming threshold, while it remains finite for irregular packings. At packing densities above the jamming transition there can be no shear, unless the disks are allowed to break. Breaking of disks induces a large number of small local bearings. Clusters of rotating particles may contribute to e.g. pre-rupture yielding in landslides, snow avalanches and to the formation of aseismic gaps in tectonic fault zones.

  14. Adult Regularization of Inconsistent Input Depends on Pragmatic Factors

    ERIC Educational Resources Information Center

    Perfors, Amy

    2016-01-01

    In a variety of domains, adults who are given input that is only partially consistent do not discard the inconsistent portion (regularize) but rather maintain the probability of consistent and inconsistent portions in their behavior (probability match). This research investigates the possibility that adults probability match, at least in part,…

  15. Advance Organizer Strategy for Educable Mentally Retarded and Regular Children.

    ERIC Educational Resources Information Center

    Chang, Moon K.

    The study examined the effects of an advance organizer on the learning and retention of facts and concepts obtained from a sound film by educable mentally retarded (N=30) and regular children (N=30) in a mainstreamed secondary public school class. Also examined was the interaction between the advance organizer and ability levels of the Ss. Results…

  16. Relativistic regular approximations revisited: An infinite-order relativistic approximation

    SciTech Connect

    Dyall, K.G.; van Lenthe, E.

    1999-07-01

    The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy-Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy-Wouthuysen transformation, which results in the ZORA Hamiltonian and a nonunit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order E^3/c^4 for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the nonvariational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue. © 1999 American Institute of Physics.

  17. Multiple Learning Strategies Project. Medical Assistant. [Regular Vocational. Vol. 3.

    ERIC Educational Resources Information Center

    Varney, Beverly; And Others

    This instructional package, one of four designed for regular vocational students, focuses on the vocational area of medical assistant. Contained in this document are forty learning modules organized into four units: office surgery; telephoning; bandaging; and medications and treatments. Each module includes these elements: a performance objective…

  18. 77 FR 15142 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    .... Executive Session III. Approval of the Regular Board of Directors Meeting Minutes IV. Approval of the Audit Committee Meeting Minutes V. Approval of the Finance, Budget and Program Committee Meeting Minutes VI. Approval of the Corporate Administration Committee Meeting Minutes VII. Approval of FY 2011 Audit...

  19. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Cable television system regular monitoring. 76.614 Section 76.614 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable...

  20. Sparsely sampling the sky: Regular vs. random sampling

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.

    2015-09-01

    Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the (usually favoured) contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparse observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling to constrain the overall galaxy power spectrum and the cosmological parameters.
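The window-function effect this abstract describes can be illustrated with a 1-D toy sketch (the sky size, patch count, and patch width below are arbitrary assumptions, not the survey geometry of the paper): a periodic mask concentrates window-function power at harmonics of the patch spacing, while a random mask spreads it across all scales.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patches, patch = 1024, 32, 8  # hypothetical 1-D "sky" and patch geometry

# Regular mask: patches spaced evenly, one every n // n_patches pixels.
regular = np.zeros(n)
for start in range(0, n, n // n_patches):
    regular[start:start + patch] = 1.0

# Random mask: the same number of patches at random positions (they may overlap).
random_mask = np.zeros(n)
for start in rng.choice(n - patch, size=n_patches, replace=False):
    random_mask[start:start + patch] = 1.0

def window_power(mask):
    """Power spectrum of the mask (the window function), normalized to DC = 1."""
    w = np.abs(np.fft.rfft(mask)) ** 2
    return w / w[0]

wr, wx = window_power(regular), window_power(random_mask)

# The regular mask is periodic with period 32 pixels, so its power lives only
# at harmonics of bin n/32 = 32; the random mask has no preferred scale.
peak_frac_regular = wr[32] / wr[1:].sum()
peak_frac_random = wx[32] / wx[1:].sum()
```

Running this shows the periodic mask's off-DC power confined to multiples of bin 32, whereas the random mask's power is roughly flat, which is the "no preferred scale" property the paper argues for.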

  1. 75 FR 13598 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 10 a.m., Monday, March 22, 2010. PLACE: 1325 G Street, NW., Suite 800 Boardroom, Washington, DC 20005. STATUS: Open. CONTACT PERSON...

  2. 78 FR 36794 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-19

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 9:30 a.m., Tuesday, June 25, 2013. PLACE: 999 North Capitol St NE., Suite 900, Gramlich Boardroom, Washington, DC 20002. STATUS:...

  3. Learning With l1 -Regularizer Based on Markov Resampling.

    PubMed

    Gong, Tieliang; Zou, Bin; Xu, Zongben

    2016-05-01

    Learning with an l1-regularizer has brought about a great deal of research in the learning theory community. Previously known results for learning with an l1-regularizer are based on the assumption that samples are independent and identically distributed (i.i.d.), and the best obtained learning rate for l1-regularization type algorithms is O(1/√m), where m is the sample size. This paper goes beyond the classic i.i.d. framework and investigates the generalization performance of least square regression with an l1-regularizer (l1-LSR) based on uniformly ergodic Markov chain (u.e.M.c) samples. On the theoretical side, we prove that the learning rate of l1-LSR for u.e.M.c samples, l1-LSR(M), is of the order O(1/m), which is faster than O(1/√m) for the i.i.d. counterpart. On the practical side, we propose an algorithm based on a resampling scheme to generate u.e.M.c samples. We show that the proposed l1-LSR(M) improves on l1-LSR(i.i.d.) in generalization error at the low cost of u.e.M.c resampling. PMID:26011874
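The estimator the abstract studies, least square regression with an l1-regularizer, can be sketched generically with proximal gradient descent (ISTA). This is a minimal illustration on i.i.d. data; the function name, data, and penalty weight are assumptions, and the paper's Markov resampling scheme is not reproduced here.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_lsr(X, y, lam, n_iter=500):
    """ISTA sketch for min_w (1/2m)||Xw - y||^2 + lam*||w||_1."""
    m, d = X.shape
    w = np.zeros(d)
    L = np.linalg.norm(X, 2) ** 2 / m  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / m
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=200)
w_hat = l1_lsr(X, y, lam=0.01)          # recovers the support, zeroes the rest
```

The l1 penalty drives the seven irrelevant coefficients to (near) zero while leaving the three true coefficients close to their values.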

  4. A Response to the Regular Education/Special Education Initiative.

    ERIC Educational Resources Information Center

    McCarthy, Jeanne McCrae

    1987-01-01

    The position paper of the Division for Learning Disabilities of the Council for Exceptional Children proposes seven components (including the differentiation of learning disabilities from learning problems) of the final policy of the Office of Special Education and Rehabilitative Services concerning the Regular Education/Special Education…

  5. Preverbal Infants Infer Intentional Agents from the Perception of Regularity

    ERIC Educational Resources Information Center

    Ma, Lili; Xu, Fei

    2013-01-01

    Human adults have a strong bias to invoke intentional agents in their intuitive explanations of ordered wholes or regular compositions in the world. Less is known about the ontogenetic origin of this bias. In 4 experiments, we found that 9-to 10-month-old infants expected a human hand, but not a mechanical tool with similar affordances, to be the…

  6. New Technologies in Portugal: Regular Middle and High School

    ERIC Educational Resources Information Center

    Florentino, Teresa; Sanchez, Lucas; Joyanes, Luis

    2010-01-01

    Purpose: The purpose of this paper is to elaborate upon the relation between information and communication technologies (ICT), particularly web-based resources, and their use, programs and learning in Portuguese middle and high regular public schools. Design/methodology/approach: Adding collected documentation on curriculum, laws and other related…

  7. Regularization in Short-Term Memory for Serial Order

    ERIC Educational Resources Information Center

    Botvinick, Matthew; Bylsma, Lauren M.

    2005-01-01

    Previous research has shown that short-term memory for serial order can be influenced by background knowledge concerning regularities of sequential structure. Specifically, it has been shown that recall is superior for sequences that fit well with familiar sequencing constraints. The authors report a corresponding effect pertaining to serial…

  8. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
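The Soft-Impute iteration described in the abstract, replacing missing entries with a soft-thresholded SVD of the current completion, is simple enough to sketch on a small dense toy problem (this full-SVD version ignores the authors' low-rank/warm-start optimizations, and the matrix sizes and penalty are assumptions):

```python
import numpy as np

def soft_impute(X_obs, mask, lam, n_iter=100):
    """Soft-Impute sketch: fill missing entries from a soft-thresholded SVD."""
    Z = np.where(mask, X_obs, 0.0)
    for _ in range(n_iter):
        # Observed entries come from the data; missing ones from the current Z.
        U, s, Vt = np.linalg.svd(np.where(mask, X_obs, Z), full_matrices=False)
        s = np.maximum(s - lam, 0.0)      # soft-threshold the singular values
        Z = (U * s) @ Vt
    return Z

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 30))  # rank-3 ground truth
mask = rng.random(A.shape) < 0.6                         # observe 60% of entries
X_hat = soft_impute(A, mask, lam=0.5)
rel_err = np.linalg.norm(X_hat - A) / np.linalg.norm(A)
```

Because the ground truth is low-rank and enough entries are observed, the nuclear-norm-regularized completion recovers the unobserved entries to within a modest relative error.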

  9. 12 CFR 311.5 - Regular procedure for closing meetings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... RULES GOVERNING PUBLIC OBSERVATION OF MEETINGS OF THE CORPORATION'S BOARD OF DIRECTORS § 311.5 Regular... a meeting will be taken only when a majority of the entire Board votes to take such action. In deciding whether to close a meeting or portion of a meeting, the Board will consider whether the...

  10. Regular Class Participation System (RCPS). A Final Report.

    ERIC Educational Resources Information Center

    Ferguson, Dianne L.; And Others

    The Regular Class Participation System (RCPS) project attempted to develop, implement, and validate a system for placing and maintaining students with severe disabilities in general education classrooms, with a particular emphasis on achieving both social and learning outcomes for students. A teacher-based planning strategy was developed and…

  11. The Regular Educator's Role in the Individual Education Plan Process.

    ERIC Educational Resources Information Center

    Weishaar, Mary Konya

    2001-01-01

    Presents a case that demonstrates why general educators must be knowledgeable about and involved in the individual education plan for a student with a disability. Describes new regulations in individual education plan processes. Concludes that the overall intent of the changes is to bring special educators and regular educators together for the…

  12. Cost Effectiveness of Premium Versus Regular Gasoline in MCPS Buses.

    ERIC Educational Resources Information Center

    Baacke, Clifford M.; Frankel, Steven M.

    The primary question posed in this study is whether premium or regular gasoline is more cost effective for the Montgomery County Public School (MCPS) bus fleet, as a whole, when miles-per-gallon, cost-per-gallon, and repair costs associated with mileage are considered. On average, both miles-per-gallon, and repair costs-per-mile favor premium…

  13. New vision based navigation clue for a regular colonoscope's tip

    NASA Astrophysics Data System (ADS)

    Mekaouar, Anouar; Ben Amar, Chokri; Redarce, Tanneguy

    2009-02-01

    Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to be performed safely. Indeed, the practitioner needs to contend with both the tortuousness of the colon and the mastering of a colonoscope, taking the visual data acquired by the scope's tip into account and relying mostly on common sense and skill to steer it in a fashion that promotes a safe insertion of the device's shaft. In that context, we propose a new navigation clue for the tip of a regular colonoscope in order to assist surgeons during a colonoscopic examination. First, we consider a patch of the inner colon depicted in a regular colonoscopy frame. We then perform a sketchy 3D reconstruction of the corresponding 2D data, and a suggested navigation trajectory is derived on the basis of the obtained relief. Both the visible and invisible lumen cases are considered. Owing to its low computational cost, such a strategy would allow for intraoperative configuration changes and thus reduce the effect of the colon's non-rigidity. Besides, it tends to provide a safe navigation trajectory through the whole colon, since this approach aims at keeping the extremity of the instrument as far as possible from the colon wall during navigation. In order to make the considered process effective, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.

  14. Effect of regular and decaffeinated coffee on serum gastrin levels.

    PubMed

    Acquaviva, F; DeFrancesco, A; Andriulli, A; Piantino, P; Arrigoni, A; Massarenti, P; Balzola, F

    1986-04-01

    We evaluated the hypothesis that the noncaffeine gastric acid stimulant effect of coffee might be by way of serum gastrin release. After 10 healthy volunteers drank 50 ml of coffee solution corresponding to one cup of home-made regular coffee containing 10 g of sugar and 240 mg/100 ml of caffeine, serum total gastrin levels peaked at 10 min and returned to basal values within 30 min; the response was of little significance (1.24 times the median basal value). Drinking 100 ml of sugared water (as control) resulted in occasional random elevations of serum gastrin which were not statistically significant. Drinking 100 ml of regular or decaffeinated coffee resulted in a prompt and lasting elevation of total gastrin; mean integrated outputs after regular or decaffeinated coffee were, respectively, 2.3 and 1.7 times the values in the control test. Regular and decaffeinated coffees share a strong gastrin-releasing property. Neither distension, osmolarity, calcium, nor amino acid content of the coffee solution can account for this property, which should be ascribed to some other unidentified ingredient. This property is at least partially lost during the process of caffeine removal. PMID:3745848

  15. Integration of Dependent Handicapped Classes into the Regular School.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Guidelines are provided for integrating the dependent handicapped student (DHS) into the regular school in Alberta, Canada. A short overview comprises the introduction. Identified are two types of integration: (1) incidental contact and (2) planned contact for social, recreational, and educational activities with other students. Noted are types of…

  16. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, G.C.

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. ?? 2008 Springer-Verlag Berlin Heidelberg.
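The regularization tradeoff this abstract describes, data fit versus deviation from a prior model, explored over different weighting levels, can be sketched generically with Tikhonov inversion. The smoothing kernel, noise level, and first-difference penalty below are illustrative assumptions, not the hydraulic tomography model of the chapter.

```python
import numpy as np

def tikhonov(A, b, D, lam):
    """Regularized inversion: minimize ||A x - b||^2 + lam^2 * ||D x||^2."""
    return np.linalg.solve(A.T @ A + lam**2 * (D.T @ D), A.T @ b)

rng = np.random.default_rng(0)
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)  # ill-conditioned smoothing kernel
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-2 * rng.normal(size=n)          # noisy observations
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)        # first-difference (smoothness) penalty

# Sweep the regularization weight: too little amplifies noise, too much
# over-smooths; a moderate value balances data fit and model norm.
errs = {lam: np.linalg.norm(tikhonov(A, b, D, lam) - x_true)
        for lam in (1e-6, 1e-2, 10.0)}
```

This is the systematic tradeoff analysis in miniature: the reconstruction error is smallest at an intermediate weighting, which is what an L-curve or discrepancy-based sweep would select.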

  17. Regular rotating de Sitter–Kerr black holes and solitons

    NASA Astrophysics Data System (ADS)

    Dymnikova, Irina; Galaktionov, Evgeny

    2016-07-01

    We study the basic generic properties of the class of regular rotating solutions asymptotically Kerr for a distant observer, obtained with using the Gürses–Gürsey algorithm from regular spherically symmetric solutions specified by {T}tt={T}rr which belong to the Kerr–Schild metrics. All regular solutions obtained with the Newman–Janis complex translation from the known spherical solutions, belong to this class. Spherical solutions with {T}tt={T}rr satisfying the weak energy condition (WEC), have obligatory de Sitter center. Rotation transforms the de Sitter center into the interior de Sitter vacuum disk. Regular de Sitter–Kerr solutions have at most two horizons and two ergospheres, and two different kinds of interiors. In the case when an original spherical solution satisfies the dominant energy condition, there can exist the interior de Sitter vacuum { S }-surface which contains the de Sitter disk as a bridge. The WEC is violated in the internal cavities between the { S }-surface and the disk, which are filled thus with a phantom fluid. In the case when a related spherical solution violates the dominant energy condition, vacuum interior of a rotating solution reduces to the de Sitter disk only.

  18. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Cable television system regular monitoring. 76.614 Section 76.614 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable...

  19. Factors Contributing to Regular Smoking in Adolescents in Turkey

    ERIC Educational Resources Information Center

    Can, Gamze; Topbas, Murat; Oztuna, Funda; Ozgun, Sukru; Can, Emine; Yavuzyilmaz, Asuman

    2009-01-01

    Purpose: The objectives of this study were to determine the levels of lifetime cigarette use, daily use, and current use among young people (aged 15-19 years) and to examine the risk factors contributing to regular smoking. Methods: The number of students was determined proportionately to the numbers of students in all the high schools in the…

  20. From Numbers to Letters: Feedback Regularization in Visual Word Recognition

    ERIC Educational Resources Information Center

    Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel

    2010-01-01

    Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…

  1. Rhythm's Gonna Get You: Regular Meter Facilitates Semantic Sentence Processing

    ERIC Educational Resources Information Center

    Rothermich, Kathrin; Schmidt-Kassow, Maren; Kotz, Sonja A.

    2012-01-01

    Rhythm is a phenomenon that fundamentally affects the perception of events unfolding in time. In language, we define "rhythm" as the temporal structure that underlies the perception and production of utterances, whereas "meter" is defined as the regular occurrence of beats (i.e. stressed syllables). In stress-timed languages such as German, this…

  2. Identifying and Exploiting Spatial Regularity in Data Memory References

    SciTech Connect

    Mohan, T; de Supinski, B R; McKee, S A; Mueller, F; Yoo, A; Schulz, M

    2003-07-24

    The growing processor/memory performance gap causes the performance of many codes to be limited by memory accesses. If known to exist in an application, strided memory accesses forming streams can be targeted by optimizations such as prefetching, relocation, remapping, and vector loads. Undetected, they can be a significant source of memory stalls in loops. Existing stream-detection mechanisms either require special hardware, which may not gather statistics for subsequent analysis, or are limited to compile-time detection of array accesses in loops. Formally, little treatment has been accorded to the subject; the concept of locality fails to capture the existence of streams in a program's memory accesses. The contributions of this paper are as follows. First, we define spatial regularity as a means to discuss the presence and effects of streams. Second, we develop measures to quantify spatial regularity, and we design and implement an on-line, parallel algorithm to detect streams - and hence regularity - in running applications. Third, we use examples from real codes and common benchmarks to illustrate how derived stream statistics can be used to guide the application of profile-driven optimizations. Overall, we demonstrate the benefits of our novel regularity metric as a low-cost instrument to detect potential for code optimizations affecting memory performance.
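The core of stream detection can be sketched as a sequential scan for constant-stride runs in an address trace. This is a toy, offline version (function name and trace are assumptions); the paper's algorithm is on-line and parallel and gathers fuller regularity statistics.

```python
def detect_streams(addresses, min_len=4):
    """Find constant-stride runs ('streams') of at least min_len addresses.

    Returns a list of (start_address, stride, length) tuples."""
    streams, stride, run_start, run_len = [], None, 0, 1
    for i in range(1, len(addresses)):
        d = addresses[i] - addresses[i - 1]
        if d == stride:
            run_len += 1                      # the current run continues
        else:
            if stride is not None and run_len >= min_len:
                streams.append((addresses[run_start], stride, run_len))
            stride, run_start, run_len = d, i - 1, 2   # start a new candidate run
    if stride is not None and run_len >= min_len:
        streams.append((addresses[run_start], stride, run_len))
    return streams

# A trace with a stride-8 stream, some irregular accesses, and a stride-64 stream.
trace = list(range(0, 64, 8)) + [1000, 3, 777] + list(range(4096, 4096 + 640, 64))
streams = detect_streams(trace)  # [(0, 8, 8), (4096, 64, 10)]
```

Detected streams like these are exactly the accesses that prefetching, remapping, or vector loads could target; the irregular interlude is ignored because no run reaches the minimum length.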

  3. MIT image reconstruction based on edge-preserving regularization.

    PubMed

    Casanova, R; Silva, A; Borges, A R

    2004-02-01

    Tikhonov regularization has been widely used in electrical tomography to deal with the ill-posedness of the inverse problem. However, due to the fact that discontinuities are strongly penalized, this approach tends to produce blurred images. Recently, a lot of interest has been devoted to methods with edge-preserving properties, such as those related to total variation, wavelets and half-quadratic regularization. In the present work, the performance of an edge-preserving regularization method, called ARTUR, is evaluated in the context of magnetic induction tomography (MIT). ARTUR is a deterministic method based on half-quadratic regularization, where complementary a priori information may be introduced in the reconstruction algorithm by the use of a nonnegativity constraint. The method is first tested using an MIT analytical model that generates projection data given the position, the radius and the magnetic permeability of a single nonconductive cylindrical object. It is shown that even in the presence of strong Gaussian additive noise, it is still able to recover the main features of the object. Secondly, reconstructions based on real data for different configurations of conductive nonmagnetic cylindrical objects are presented and some of their parameters estimated. PMID:15005316
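ARTUR itself targets the MIT inverse problem, but the half-quadratic idea it builds on can be sketched on a 1-D denoising toy: at each iteration, re-weight the smoothness penalty so that large gradients (edges) are penalized less than flat regions. The potential function, parameter values, and function name below are illustrative assumptions.

```python
import numpy as np

def half_quadratic_denoise(y, lam=2.0, eps=0.01, n_iter=50):
    """Edge-preserving denoising by half-quadratic (IRLS) minimization of
    ||x - y||^2 + lam * sum_i phi(x[i+1] - x[i]),  phi(t) = sqrt(t^2 + eps^2)
    (constant factors absorbed into lam)."""
    n = len(y)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # first-difference operator
    x = y.copy()
    for _ in range(n_iter):
        # Auxiliary weights: small across edges, large in flat regions.
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps**2)
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    return x

rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.normal(size=100)
x = half_quadratic_denoise(y)   # flat plateaus are smoothed, the jump survives
```

Unlike plain Tikhonov smoothing, which would blur the step, the re-weighted quadratic subproblems flatten the noise on each plateau while leaving the discontinuity essentially intact, which is the edge-preserving behavior evaluated in the paper.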

  4. Psychological Benefits of Regular Physical Activity: Evidence from Emerging Adults

    ERIC Educational Resources Information Center

    Cekin, Resul

    2015-01-01

    Emerging adulthood is a transitional stage between late adolescence and young adulthood in life-span development that requires significant changes in people's lives. Therefore, identifying protective factors for this population is crucial. This study investigated the effects of regular physical activity on self-esteem, optimism, and happiness in…

  5. The Student with Albinism in the Regular Classroom.

    ERIC Educational Resources Information Center

    Ashley, Julia Robertson

    This booklet, intended for regular education teachers who have children with albinism in their classes, begins with an explanation of albinism, then discusses the special needs of the student with albinism in the classroom, and presents information about adaptations and other methods for responding to these needs. Special social and emotional…

  6. Identifying basketball performance indicators in regular season and playoff games.

    PubMed

    García, Javier; Ibáñez, Sergio J; De Santos, Raúl Martinez; Leite, Nuno; Sampaio, Jaime

    2013-03-01

    The aim of the present study was to identify basketball game performance indicators which best discriminate winners and losers in regular season and playoffs. The sample used was composed by 323 games of ACB Spanish Basketball League from the regular season (n=306) and from the playoffs (n=17). A previous cluster analysis allowed splitting the sample in balanced (equal or below 12 points), unbalanced (between 13 and 28 points) and very unbalanced games (above 28 points). A discriminant analysis was used to identify the performance indicators either in regular season and playoff games. In regular season games, the winning teams dominated in assists, defensive rebounds, successful 2 and 3-point field-goals. However, in playoff games the winning teams' superiority was only in defensive rebounding. In practical applications, these results may help the coaches to accurately design training programs to reflect the importance of having different offensive set plays and also have specific conditioning programs to prepare for defensive rebounding. PMID:23717365

  7. 12 CFR 311.5 - Regular procedure for closing meetings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Regular procedure for closing meetings. 311.5 Section 311.5 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE RULES GOVERNING PUBLIC OBSERVATION OF MEETINGS OF THE CORPORATION'S BOARD OF DIRECTORS § 311.5...

  8. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. Classic methods regularize the image by total variation (TV) and/or a wavelet or some other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial correlation among pixels first, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered as the regularization term. Concretely, by transforming the original model, the SR question can be solved by two alternating iterative sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image can be obtained by solving the first and second sub-problem, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.

  9. Involving Impaired, Disabled, and Handicapped Persons in Regular Camp Programs.

    ERIC Educational Resources Information Center

    American Alliance for Health, Physical Education, and Recreation, Washington, DC. Information and Research Utilization Center.

    The publication provides some broad guidelines for serving impaired, disabled, and handicapped children in nonspecialized or regular day and residential camps. Part One on the rationale and basis for integrated camping includes three chapters which cover mainstreaming and the normalization principle, the continuum of services (or Cascade System)…

  10. Regular and homeward travel speeds of arctic wolves

    USGS Publications Warehouse

    Mech, L.D.

    1994-01-01

    Single wolves (Canis lupus arctos), a pair, and a pack of five habituated to the investigator on an all-terrain vehicle were followed on Ellesmere Island, Northwest Territories, Canada, during summer. Their mean travel speed was measured on barren ground at 8.7 km/h during regular travel and 10.0 km/h when returning to a den.

  11. Nonnative Processing of Verbal Morphology: In Search of Regularity

    ERIC Educational Resources Information Center

    Gor, Kira; Cook, Svetlana

    2010-01-01

    There is little agreement on the mechanisms involved in second language (L2) processing of regular and irregular inflectional morphology and on the exact role of age, amount, and type of exposure to L2 resulting in differences in L2 input and use. The article contributes to the ongoing debates by reporting the results of two experiments on Russian…

  12. Multiple Learning Strategies Project. Medical Assistant. [Regular Vocational. Vol. 4.

    ERIC Educational Resources Information Center

    Varney, Beverly; And Others

    This instructional package, one of four designed for regular vocational students, focuses on the vocational area of medical assistant. Contained in this document are forty-seven learning modules organized into nine units: review for competency; third-party billing; patient teaching; skill building; bookkeeping; interpersonal relationships; medical…

  13. Low-Rank Regularization for Learning Gene Expression Programs

    PubMed Central

    Ye, Guibo; Tang, Mengfan; Cai, Jian-Feng; Nie, Qing; Xie, Xiaohui

    2013-01-01

    Learning gene expression programs directly from a set of observations is challenging due to the complexity of gene regulation, the high noise of experimental measurements, and the insufficient number of experimental measurements. Imposing additional constraints with strong and biologically motivated regularizations is critical in developing reliable and effective algorithms for inferring gene expression programs. Here we propose a new form of regularization that constrains the number of independent connectivity patterns between regulators and targets, motivated by the modular design of gene regulatory programs and the belief that the total number of independent regulatory modules should be small. We formulate a multi-target linear regression framework to incorporate this type of regularization, in which the number of independent connectivity patterns is expressed as the rank of the connectivity matrix between regulators and targets. We then generalize the linear framework to nonlinear cases, and prove that the generalized low-rank regularization model is still convex. Efficient algorithms are derived to solve both the linear and nonlinear low-rank regularized problems. Finally, we test the algorithms on three gene expression datasets, and show that the low-rank regularization improves the accuracy of gene expression prediction in these three datasets. PMID:24358148
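The linear case of the framework above, multi-target regression with a penalty on the rank of the coefficient ("connectivity") matrix, can be sketched with the convex nuclear-norm surrogate solved by proximal gradient descent. The function name, data sizes, and penalty weight are assumptions for illustration; this is not the authors' algorithm or nonlinear extension.

```python
import numpy as np

def low_rank_regression(X, Y, lam, n_iter=300):
    """Sketch of min_W (1/2m)||XW - Y||_F^2 + lam*||W||_*  via proximal
    gradient with singular-value soft-thresholding."""
    m, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    step = m / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(n_iter):
        G = X.T @ (X @ W - Y) / m               # gradient of the data-fit term
        U, s, Vt = np.linalg.svd(W - step * G, full_matrices=False)
        W = (U * np.maximum(s - step * lam, 0.0)) @ Vt
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                      # 20 regulators, 300 samples
W_true = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))  # rank-2 "program"
Y = X @ W_true + 0.01 * rng.normal(size=(300, 15))  # 15 targets
W_hat = low_rank_regression(X, Y, lam=0.05)
rank_hat = int(np.sum(np.linalg.svd(W_hat, compute_uv=False) > 1e-3))
rel_err = np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true)
```

The nuclear-norm regularizer recovers the small number of independent connectivity patterns (here, two), mirroring the paper's premise that the number of regulatory modules is small.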

  14. Distances and isomorphisms in 4-regular circulant graphs

    NASA Astrophysics Data System (ADS)

    Donno, Alfredo; Iacono, Donatella

    2016-06-01

    We compute the Wiener index and the Hosoya polynomial of the Cayley graph of some cyclic groups, with all possible generating sets containing four elements, up to isomorphism. We find out that the order 17 is the smallest case providing two non-isomorphic 4-regular circulant graphs with the same Wiener index. Some open problems and questions are listed.
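The Wiener index of a circulant graph can be computed with a single BFS, since circulant graphs are vertex-transitive. The sketch below is generic (it does not reproduce the specific order-17 pair from the abstract) and verifies itself on the cycle C_8 = C_8(1), whose Wiener index n^3/8 = 64 is standard for even cycles.

```python
from collections import deque

def wiener_index_circulant(n, gens):
    """Wiener index (sum of distances over unordered vertex pairs) of the
    circulant graph C_n(gens), where i ~ i +/- g (mod n) for each g in gens."""
    dist = [-1] * n
    dist[0] = 0
    q = deque([0])
    while q:                                   # BFS from vertex 0 suffices:
        v = q.popleft()                        # the graph is vertex-transitive
        for g in gens:
            for u in ((v + g) % n, (v - g) % n):
                if dist[u] < 0:
                    dist[u] = dist[v] + 1
                    q.append(u)
    return n * sum(dist) // 2                  # each unordered pair counted once

w_c8 = wiener_index_circulant(8, (1,))         # even cycle: 8**3 // 8 = 64
w_c10 = wiener_index_circulant(10, (1, 2))     # a 4-regular circulant graph
```

Enumerating all four-element generating sets of Z_n and comparing these values is exactly the kind of computation behind the paper's search for non-isomorphic 4-regular circulants sharing a Wiener index.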

  15. Elementary Teachers' Perspectives of Inclusion in the Regular Education Classroom

    ERIC Educational Resources Information Center

    Olinger, Becky Lorraine

    2013-01-01

    The purpose of this qualitative study was to examine regular education and special education teacher perceptions of inclusion services in an elementary school setting. In this phenomenological study, purposeful sampling techniques and data were used to conduct a study of inclusion in the elementary schools. In-depth one-to-one interviews with 8…

  16. A Unified Approach for Solving Nonlinear Regular Perturbation Problems

    ERIC Educational Resources Information Center

    Khuri, S. A.

    2008-01-01

    This article describes a simple alternative unified method of solving nonlinear regular perturbation problems. The procedure is based upon the manipulation of Taylor's approximation for the expansion of the nonlinear term in the perturbed equation. An essential feature of this technique is the relative simplicity used and the associated unified…

  17. Implicit Learning of L2 Word Stress Regularities

    ERIC Educational Resources Information Center

    Chan, Ricky K. W.; Leung, Janny H. C.

    2014-01-01

    This article reports an experiment on the implicit learning of second language stress regularities, and presents a methodological innovation on awareness measurement. After practising two-syllable Spanish words, native Cantonese speakers with English as a second language (L2) completed a judgement task. Critical items differed only in placement of…

  18. Acquisition of Formulaic Sequences in Intensive and Regular EFL Programmes

    ERIC Educational Resources Information Center

    Serrano, Raquel; Stengers, Helene; Housen, Alex

    2015-01-01

    This paper aims to analyse the role of time concentration of instructional hours on the acquisition of formulaic sequences in English as a foreign language (EFL). Two programme types that offer the same amount of hours of instruction are considered: intensive (110 hours/1 month) and regular (110 hours/7 months). The EFL learners under study are…

  19. A simple way to measure daily lifestyle regularity

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Frank, Ellen; Potts, Jaime M.; Kupfer, David J.

    2002-01-01

A brief diary instrument to quantify daily lifestyle regularity (SRM-5) is developed and compared with a much longer version of the instrument (SRM-17) described and used previously. Three studies are described. In Study 1, SRM-17 scores (2 weeks) were collected from a total of 293 healthy control subjects (both genders) aged between 19 and 92 years. Five items, (1) Get out of bed, (2) First contact with another person, (3) Start work, housework or volunteer activities, (4) Have dinner, and (5) Go to bed, were then selected from the 17 items and SRM-5 scores calculated as if these five items were the only ones collected. Comparisons were made with SRM-17 scores from the same subject-weeks, looking at correlations between the two SRM measures, and the effects of age and gender on lifestyle regularity as measured by the two instruments. In Study 2 this process was repeated in a group of 27 subjects who were in remission from unipolar depression after treatment with psychotherapy and who completed SRM-17 for at least 20 successive weeks. SRM-5 and SRM-17 scores were then correlated within an individual using time as the random variable, allowing an indication of how successful SRM-5 was in tracking changes in lifestyle regularity (within an individual) over time. In Study 3 an SRM-5 diary instrument was administered to 101 healthy control subjects (both genders, aged 20-59 years) for two successive weeks to obtain normative measures and to test for correlations with age and morningness. Measures of lifestyle regularity from SRM-5 correlated quite well (about 0.8) with those from SRM-17 both between subjects, and within-subjects over time. As a detector of irregularity as defined by SRM-17, the SRM-5 instrument showed acceptable values of kappa (0.69), sensitivity (74%) and specificity (95%). There were, however, differences in mean level, with SRM-5 scores being about 0.9 units [about one standard deviation (SD)] above SRM-17 scores from the same subject-weeks. SRM-5…

  20. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with varying 2-14mm kernel sizes. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. PMID:27114339
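
The core numerical step described here, spherical mean value (SMV) filtering inverted with Tikhonov regularization, can be sketched as follows. This is a minimal single-kernel illustration, not the authors' implementation: REV-SHARP varies the kernel radius near the brain boundary, and the function names and parameter values below are assumptions.

```python
import numpy as np

def smv_kernel(shape, radius):
    """Build a normalized spherical-mean-value (SMV) kernel in image space."""
    zz, yy, xx = np.meshgrid(*(np.arange(n) - n // 2 for n in shape), indexing="ij")
    sphere = (np.sqrt(xx**2 + yy**2 + zz**2) <= radius).astype(float)
    return sphere / sphere.sum()

def tikhonov_smv_removal(phase, radius=4, lam=1e-2):
    """Apply the high-pass operator (I - SMV) to the phase, then invert it
    with Tikhonov regularization in the Fourier domain (a simplified
    RESHARP-style step with a fixed kernel size)."""
    K = np.fft.fftn(np.fft.ifftshift(smv_kernel(phase.shape, radius)))
    D = 1.0 - K                                   # high-pass operator I - SMV
    filtered = np.fft.fftn(phase) * D             # background-suppressed phase
    local = np.fft.ifftn(filtered * np.conj(D) / (np.abs(D) ** 2 + lam))
    return local.real
```

The regularization parameter `lam` trades off deconvolution sharpness against noise amplification, exactly the ill-posedness issue Tikhonov regularization addresses.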

  1. Pitch strength of regular-interval click trains with different length “runs” of regular intervals

    PubMed Central

    Yost, William A.; Mapes-Riordan, Dan; Shofner, William; Dye, Raymond; Sheft, Stanley

    2009-01-01

    Click trains were generated with first- and second-order statistics following Kaernbach and Demany [J. Acoust. Soc. Am. 104, 2298–2306 (1998)]. First-order intervals are between successive clicks, while second-order intervals are those between every other click. Click trains were generated with a repeating alternation of fixed and random intervals which produce a pitch at the reciprocal of the duration of the fixed interval. The intervals were then randomly shuffled and compared to the unshuffled, alternating click trains in pitch-strength comparison experiments. In almost all comparisons for the first-order interval stimuli, the shuffled-interval click trains had a stronger pitch strength than the unshuffled-interval click trains. The shuffled-interval click trains only produced stronger pitches for second-order interval stimuli when the click trains were unfiltered. Several experimental conditions and an analysis of runs of regular and random intervals in these click trains suggest that the auditory system is sensitive to runs of regular intervals in a stimulus that contains a mix of regular and random intervals. These results indicate that fine-structure regularity plays a more important role in pitch perception than randomness, and that the long-term autocorrelation function or spectra of these click trains are not good predictors of pitch strength. PMID:15957774
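
The stimulus construction described above, a repeating alternation of fixed and random intervals that is then shuffled, can be sketched in a few lines. The interval ranges below are illustrative assumptions, not the published parameters:

```python
import random

def alternating_intervals(fixed_ms, n_pairs, lo_ms=2.0, hi_ms=18.0, seed=0):
    """Interleave a fixed interval (producing a pitch at 1/fixed) with
    random intervals, in the spirit of the first-order stimuli of
    Kaernbach and Demany (1998)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_pairs):
        out.append(fixed_ms)                   # regular interval
        out.append(rng.uniform(lo_ms, hi_ms))  # random interval
    return out

def shuffled(intervals, seed=1):
    """Randomly reorder the same intervals: the strict alternation is
    destroyed, but chance 'runs' of regular intervals appear."""
    rng = random.Random(seed)
    out = list(intervals)
    rng.shuffle(out)
    return out
```

Both trains contain identical interval statistics overall; only the ordering differs, which is what makes the pitch-strength comparison informative about run sensitivity.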

  2. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  3. Light-Front Quantization of the Vector Schwinger Model with a Photon Mass Term in Faddeevian Regularization

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.

    2016-07-01

In this talk, we study the light-front quantization of the vector Schwinger model with a photon mass term in Faddeevian regularization, describing two-dimensional electrodynamics with massless fermions but with a mass term for the U(1) gauge field. This theory is gauge-non-invariant (GNI). We construct a gauge-invariant (GI) theory using the Stueckelberg mechanism and then recover the physical content of the original GNI theory from the newly constructed GI theory under some special gauge-fixing conditions (GFC's). We then study the light-front quantization of this new GI theory.

  4. 20 CFR 220.17 - Recovery from disability for work in the regular occupation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Work in an Employee's Regular Railroad Occupation § 220.17 Recovery from disability for work in the regular occupation. (a) General. Disability for work in the regular occupation will end if— (1) There is... the duties of his or her regular occupation. The Board provides a trial work period before...

  5. 20 CFR 220.17 - Recovery from disability for work in the regular occupation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Work in an Employee's Regular Railroad Occupation § 220.17 Recovery from disability for work in the regular occupation. (a) General. Disability for work in the regular occupation will end if— (1) There is... the duties of his or her regular occupation. The Board provides a trial work period before...

  6. Subspace scheduling and parallel implementation of non-systolic regular iterative algorithms

    SciTech Connect

    Roychowdhury, V.P.; Kailath, T.

    1989-01-01

The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to a renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs; non-systolic RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the so-called hyperplanar scheduling for systolic algorithms can no longer be used to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree, we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays that will be compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower dimensional processor arrays for systolic algorithms.
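
The hyperplanar-scheduling condition that the paper generalizes can be stated in one line: a linear schedule t(i) = λ·i is causal only if it delays every dependence vector by at least one step. The toy check below, with made-up dependence vectors, shows why non-systolic RIAs defeat any single hyperplane:

```python
def is_valid_linear_schedule(lam, deps):
    """A hyperplanar schedule t(i) = lam . i is causal iff every dependence
    vector d of the RIA satisfies lam . d >= 1 (each dependence consumes at
    least one time step)."""
    return all(sum(l * di for l, di in zip(lam, d)) >= 1 for d in deps)

# Uniform, systolic-style dependences admit a hyperplane schedule...
systolic_deps = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

# ...but opposing dependences (of the kind arising in pivoting-type RIAs)
# defeat any single hyperplane: lam.(1,0) >= 1 and lam.(-1,0) >= 1 cannot
# both hold, which is what motivates scheduling along subspaces instead.
nonsystolic_deps = [(1, 0), (-1, 0)]
```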

  7. Regularizing the r-mode Problem for Nonbarotropic Relativistic Stars

    NASA Technical Reports Server (NTRS)

    Lockitch, Keith H.; Andersson, Nils; Watts, Anna L.

    2004-01-01

We present results for r-modes of relativistic nonbarotropic stars. We show that the main differential equation, which is formally singular at lowest order in the slow-rotation expansion, can be regularized if one considers the initial value problem rather than the normal mode problem. However, a more physically motivated way to regularize the problem is to include higher order terms. This allows us to develop a practical approach for solving the problem and we provide results that support earlier conclusions obtained for uniform density stars. In particular, we show that there will exist a single r-mode for each permissible combination of l and m. We discuss these results and provide some caveats regarding their usefulness for estimates of gravitational-radiation reaction timescales. The close connection between the seemingly singular relativistic r-mode problem and issues arising because of the presence of co-rotation points in differentially rotating stars is also clarified.

  8. Mechanisms of evolution of avalanches in regular graphs

    NASA Astrophysics Data System (ADS)

    Handford, Thomas P.; Pérez-Reche, Francisco J.; Taraskin, Sergei N.

    2013-06-01

    A mapping of avalanches occurring in the zero-temperature random-field Ising model to life periods of a population experiencing immigration is established. Such a mapping allows the microscopic criteria for the occurrence of an infinite avalanche in a q-regular graph to be determined. A key factor for an avalanche of spin flips to become infinite is that it interacts in an optimal way with previously flipped spins. Based on these criteria, we explain why an infinite avalanche can occur in q-regular graphs only for q>3 and suggest that this criterion might be relevant for other systems. The generating function techniques developed for branching processes are applied to obtain analytical expressions for the durations, pulse shapes, and power spectra of the avalanches. The results show that only very long avalanches exhibit a significant degree of universality.
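
The generating-function machinery mentioned above can be illustrated with the standard branching-process calculation. The offspring model below, in which each flipped spin triggers each of its q-1 outward neighbours independently with probability p, is a simplified stand-in for the RFIM dynamics, not the paper's exact analysis:

```python
def extinction_probability(p, q, tol=1e-12):
    """Avalanche front on a q-regular graph, Bethe-lattice approximation:
    offspring PGF f(s) = (1 - p + p*s)**(q - 1).  The extinction
    probability is the smallest fixed point of f in [0, 1], reached by
    monotone iteration from s = 0; an infinite avalanche is possible
    exactly when f'(1) = (q - 1) * p > 1."""
    s = 0.0
    while True:
        s_new = (1.0 - p + p * s) ** (q - 1)
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
```

For (q-1)p below 1 every avalanche dies out (extinction probability 1); above 1 a finite fraction of avalanches never stop, the branching-process analogue of the infinite avalanche.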

  9. Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation.

    PubMed

    Meng, Juan; Hu, Guyu; Li, Dong; Zhang, Yanyan; Pan, Zhisong

    2016-01-01

Domain adaptation has received much attention as a major form of transfer learning. One issue that should be considered in domain adaptation is the gap between the source domain and the target domain. In order to improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation combining source and target data, with a new regularizer that takes generalization bounds into account. This regularization term uses an integral probability metric (IPM) as the distance between the source domain and the target domain and can thus bound the testing error of an existing predictor directly from the formula. Since the computation of the IPM involves only two distributions, this generalization term is independent of specific classifiers. With popular learning models, the empirical risk minimization is expressed as a general convex optimization problem and thus can be solved effectively by existing tools. Empirical studies on synthetic data for regression and real-world data for classification show the effectiveness of this method. PMID:26819589
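
One concrete IPM is the maximum mean discrepancy (MMD), the IPM over the unit ball of an RKHS. The sketch below computes its empirical squared value between source and target samples; it is a generic illustration of an IPM-style distance term, not the paper's specific regularizer:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Empirical squared MMD with an RBF kernel: one instance of an integral
    probability metric between the sample distributions of X and Y.  Adding
    lam * rbf_mmd2(X_source, X_target) to an empirical risk yields a
    classifier-independent domain-gap penalty."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

Because the term depends only on the two sample sets, it can be bolted onto any convex empirical-risk objective, which matches the abstract's claim of classifier independence.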

  10. Giant regular polyhedra from calixarene carboxylates and uranyl

    PubMed Central

    Pasquale, Sara; Sattin, Sara; Escudero-Adán, Eduardo C.; Martínez-Belmonte, Marta; de Mendoza, Javier

    2012-01-01

Self-assembly of large multi-component systems is a common strategy for the bottom-up construction of discrete, well-defined, nanoscopic-sized cages. Icosahedral or pseudospherical viral capsids, built up from hundreds of identical proteins, constitute typical examples of the complexity attained by biological self-assembly. Chemical versions of the so-called 5 Platonic regular or 13 Archimedean semi-regular polyhedra are usually assembled combining molecular platforms with metals with commensurate coordination spheres. Here we report novel, self-assembled cages, using the conical-shaped carboxylic acid derivatives of calix[4]arene and calix[5]arene as ligands, and the uranyl cation UO₂²⁺ as a metallic counterpart, which coordinates with three carboxylates at the equatorial plane, giving rise to hexagonal bipyramidal architectures. As a result, octahedral and icosahedral anionic metallocages of nanoscopic dimensions are formed with an unusually small number of components. PMID:22510690

  11. Resolving intravoxel fiber architecture using nonconvex regularized blind compressed sensing

    NASA Astrophysics Data System (ADS)

    Chu, C. Y.; Huang, J. P.; Sun, C. Y.; Liu, W. Y.; Zhu, Y. M.

    2015-03-01

In diffusion magnetic resonance imaging, accurate and reliable estimation of intravoxel fiber architectures is a major prerequisite for tractography algorithms or any other derived statistical analysis. Several methods have been proposed that estimate intravoxel fiber architectures using low angular resolution acquisitions, owing to their shorter acquisition time and relatively low b-values, but these methods are highly sensitive to noise. In this work, we propose a nonconvex regularized blind compressed sensing approach to estimate intravoxel fiber architectures in low angular resolution acquisitions. The method models diffusion-weighted (DW) signals as a sparse linear combination of unfixed reconstruction basis functions and introduces a nonconvex regularizer to enhance the noise immunity. We present a general solving framework to simultaneously estimate the sparse coefficients and the reconstruction basis. Experiments on synthetic, phantom, and real human brain DW images demonstrate the superiority of the proposed approach.
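
The coefficient-estimation half of such a scheme can be sketched with iterative soft-thresholding. Note the hedge: the paper uses a nonconvex penalty and also updates the dictionary (the "blind" part); the convex l1 sketch below shows only the sparse-coding step over a fixed dictionary:

```python
import numpy as np

def ista(D, y, lam=0.1, steps=200):
    """Sparse coding of a signal y over a dictionary D by iterative
    soft-thresholding (ISTA): minimizes 0.5*||D x - y||^2 + lam*||x||_1.
    A convex stand-in for the nonconvex coefficient step described above."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(steps):
        x = x - (D.T @ (D @ x - y)) / L                    # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x
```

With `D = I` the fixed point is simply the soft-thresholded signal, which makes the shrinkage behaviour easy to verify by hand.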

  12. Generalization of visual regularities in newly hatched chicks (Gallus gallus).

    PubMed

    Santolin, Chiara; Rosa-Salva, Orsola; Regolin, Lucia; Vallortigara, Giorgio

    2016-09-01

    Evidence of learning and generalization of visual regularities in a newborn organism is provided in the present research. Domestic chicks have been trained to discriminate visual triplets of simultaneously presented shapes, implementing AAB versus ABA (Experiment 1), AAB versus ABB and AAB versus BAA (Experiment 2). Chicks distinguished pattern-following and pattern-violating novel test triplets in all comparisons, showing no preference for repetition-based patterns. The animals generalized to novel instances even when the patterns compared were not discriminable by the presence or absence of reduplicated elements or by symmetry (e.g., AAB vs. ABB). These findings represent the first evidence of learning and generalization of regularities at the onset of life in an animal model, revealing intriguing differences with respect to human newborns and infants. Extensive prior experience seems to be unnecessary to drive the process, suggesting that chicks are predisposed to detect patterns characterizing the visual world. PMID:27287627

  13. Special education and the regular education initiative: basic assumptions.

    PubMed

    Jenkins, J R; Pious, C G; Jewell, M

    1990-04-01

The regular education initiative (REI) is a thoughtful response to identified problems in our system for educating low-performing children, but it is not a detailed blueprint for changing the system. Educators must achieve consensus on what the REI actually proposes. The authors infer from the REI literature five assumptions regarding the roles and responsibilities of elementary regular classroom teachers, concluding that these teachers and specialists form a partnership, but the classroom teachers are ultimately in charge of the instruction of all children in their classrooms, including those who are not succeeding in the mainstream. A discussion of the target population and of several partnership models further delineates REI issues and concerns. PMID:2185027

  14. Boundary values of the Schwarzian derivative of a regular function

    SciTech Connect

    Dubinin, Vladimir N

    2011-05-31

Regular functions f in the half-plane Im z > 0 admitting an asymptotic expansion f(z) = c₁z + c₂z² + c₃z³ + γ(z)z³, where c₁ > 0, Im c₂ = 0 and the angular limit … regular self-maps and its generalizations due to Tauraso, Vlacci and Shoikhet. Bibliography: 16 titles.

  15. Regular Expression-Based Learning for METs Value Extraction.

    PubMed

    Redd, Douglas; Kuang, Jinqiu; Mohanty, April; Bray, Bruce E; Zeng-Treitler, Qing

    2016-01-01

Functional status as measured by exercise capacity is an important clinical variable in the care of patients with cardiovascular diseases. Exercise capacity is commonly reported in terms of Metabolic Equivalents (METs). In the medical records, METs can often be found in a variety of clinical notes. To extract METs values, we adapted a machine-learning algorithm called REDEx to automatically generate regular expressions. Trained and tested on a set of 2701 manually annotated text snippets (i.e. short pieces of text), the regular expressions were able to achieve a good accuracy of 0.89 and an F-measure of 0.86. This extraction tool will allow us to process the notes of millions of cardiovascular patients and extract METs values for use by researchers and clinicians. PMID:27570673
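
REDEx learns its regular expressions automatically from the annotated snippets; a hand-written pattern of the same flavour shows what the extraction step amounts to. The pattern and example text below are illustrative, not taken from the paper's learned set:

```python
import re

# Match either "7 METs" (value before keyword) or "METS: 10.5" (value after).
METS_RE = re.compile(
    r"(?:(\d+(?:\.\d+)?)\s*METs?\b)|(?:\bMETs?\b[:\s]*(\d+(?:\.\d+)?))",
    re.IGNORECASE,
)

def extract_mets(text):
    """Return all METs values found in a clinical-note snippet as floats."""
    return [float(a or b) for a, b in METS_RE.findall(text)]
```

A learned system generalizes this idea by inducing many such patterns (and their variants in spacing, punctuation, and keyword form) directly from the annotations.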

  16. The Unique Maximal GF-Regular Submodule of a Module

    PubMed Central

    Abduldaim, Areej M.; Chen, Sheng

    2013-01-01

An R-module A is called GF-regular if, for each a ∈ A and r ∈ R, there exist t ∈ R and a positive integer n such that rⁿtrⁿa = rⁿa. We proved that each unitary R-module A contains a unique maximal GF-regular submodule, which we denoted by MGF(A). Furthermore, the radical properties of A are investigated; we proved that if A is an R-module and K is a submodule of A, then MGF(K) = K∩MGF(A). Moreover, if A is projective, then MGF(A) is a G-pure submodule of A and MGF(A) = M(R) · A. PMID:24163628
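
The defining condition can be checked by brute force on small examples. The sketch below treats Z/mZ as a Z-module and searches for the witnesses t and n directly; it is a toy verification of the definition, not part of the paper:

```python
def is_gf_regular_Zm(m, n_max=4):
    """Brute-force the GF-regular condition for A = Z/mZ as a Z-module:
    for every a and r there must exist t and n >= 1 with
        r**n * t * r**n * a == r**n * a   (mod m).
    Searching r and t over 0..m-1 suffices since everything is taken mod m."""
    for a in range(m):
        for r in range(m):
            if not any((r**n * t * r**n * a - r**n * a) % m == 0
                       for n in range(1, n_max + 1) for t in range(m)):
                return False
    return True
```

For m = 6, say, the witnesses are easy: when r is invertible mod 6 one can take t to be an inverse of rⁿ, and when r shares a factor with 6 a large enough n kills both sides on that factor.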

  17. Tikhonov regularization-based operational transfer path analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Wei; Lu, Yingying; Zhang, Zhousuo

    2016-06-01

To overcome ill-posed problems in operational transfer path analysis (OTPA), and improve the stability of solutions, this paper proposes a novel OTPA based on Tikhonov regularization, which considers both fitting degrees and stability of solutions. Firstly, the fundamental theory of Tikhonov regularization-based OTPA is presented, and comparative studies are provided to validate the effectiveness on ill-posed problems. Secondly, transfer path analysis and source contribution evaluations for numerical case studies on spherical radiating acoustical sources are comparatively studied. Finally, transfer path analysis and source contribution evaluations for experimental case studies on a test bed with thin shell structures are provided. This study provides more accurate transfer path analysis for mechanical systems, which can benefit vibration reduction through structural path optimization. Furthermore, with accurate evaluation of source contributions, vibration monitoring and control by actively controlling vibration sources can be carried out effectively.
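
The regularized inversion that replaces the plain least-squares (pseudo-inverse) step in OTPA is compactly expressed through the normal equations. This is a generic sketch of Tikhonov-regularized least squares, with no connection to the paper's specific transfer matrices:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Regularized least squares: min_x ||A x - b||^2 + lam * ||x||^2.
    Solving (A^T A + lam*I) x = A^T b stabilizes the inversion when the
    transfer matrix A is ill-conditioned, at the cost of a small bias."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Increasing `lam` shrinks the solution norm (better stability, worse fit), which is exactly the fitting-degree/stability trade-off the abstract refers to.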

  18. Detection of Fukushima plume within regular Slovenian environmental radioactivity surveillance.

    PubMed

    Glavič-Cindro, D; Benedik, L; Kožar Logar, J; Vodenik, B; Zorko, B

    2013-11-01

After the Fukushima accident, aerosol and rain water samples collected within regular national monitoring programmes were carefully analysed. In rain water samples and in aerosol and iodine filters collected in the second half of March and in April 2011, I-131, Cs-134 and Cs-137 were detected. In May 2011 the activities of I-131 and Cs-134 were close to or below the detection limit, and Cs-137 reached values from the period before the Fukushima accident. Additionally, plutonium and americium activity concentrations in aerosol filters were analysed. These data were compared with measurements made after the Chernobyl contamination of Slovenia in 1986. We can conclude that with adequate regular monitoring programmes, influences of radioactive contamination due to nuclear accidents worldwide can be properly assessed. PMID:23611815

  19. Partial Regularity for Holonomic Minimisers of Quasiconvex Functionals

    NASA Astrophysics Data System (ADS)

    Hopper, Christopher P.

    2016-05-01

We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals, over a class of Sobolev mappings into a compact Riemannian manifold, to which such mappings are said to be holonomically constrained. Our approach uses the lifting of Sobolev mappings to the universal covering space, the connectedness of the covering space, an application of Ekeland's variational principle and a certain tangential A-harmonic approximation lemma obtained directly via a Lipschitz approximation argument. This allows regularity to be established directly on the level of the gradient. Several applications to variational problems in condensed matter physics with broken symmetries are also discussed, in particular those concerning the superfluidity of liquid helium-3 and nematic liquid crystals.

  20. Total Variation Regularization Used in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Wang, Huaxiang; Tang, Lei

    2007-06-01

To solve the ill-posed problem and poor resolution in electrical capacitance tomography (ECT), a new image reconstruction algorithm based on total variation (TV) regularization is proposed and a new self-adaptive mesh refinement strategy is put forward. Compared with the conventional Tikhonov regularization, this new algorithm not only stabilizes the reconstruction, but also enhances the distinguishability of the reconstructed image in areas with discontinuous medium distribution. It possesses a good edge-preserving property. The self-adaptive mesh generation technique based on this algorithm can refine the mesh automatically in specific areas according to the medium distribution. This strategy preserves the high resolution obtained by refining all elements over the region while reducing the computational load, thereby speeding up reconstruction. Both simulation and experimental results show that this algorithm has advantages in terms of resolution and real-time performance.
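
The contrast with Tikhonov regularization can be made concrete with a one-dimensional smoothed-TV denoiser. This is a generic gradient-descent sketch, not the paper's ECT reconstruction algorithm, and all parameter values are illustrative:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, eps=1e-2, steps=500, lr=0.05):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    a smoothed total-variation objective.  Unlike a Tikhonov (squared-gradient)
    penalty, the TV term barely penalizes one large jump, so edges survive."""
    x = y.astype(float).copy()
    for _ in range(steps):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d|
        g = x - y                      # gradient of the data-fit term
        g[:-1] -= lam * w              # d/dx[i]   of sqrt((x[i+1]-x[i])^2 + eps)
        g[1:] += lam * w               # d/dx[i+1] of the same term
        x -= lr * g
    return x
```

On a noisy step signal, the TV solution smooths within the flat segments while keeping the discontinuity sharp, which is the edge-preserving property the abstract highlights.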