BRST quantization of Polyakov's two-dimensional gravity
NASA Astrophysics Data System (ADS)
Itoh, Katsumi
1990-10-01
Two-dimensional gravity coupled to minimal models is quantized in the chiral gauge by the BRST method. By using the Wakimoto construction for the gravity sector, we show how the quartet mechanism of Kugo and Ojima works and solve the physical state condition. As a result the positive semi-definiteness of the physical subspace is shown. The formula of Knizhnik et al. for gravitational scaling dimensions is rederived from the physical state condition. We also observe a relation between the chiral gauge and the conformal gauge.
Dimensional Regularization is Generic
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo
The absence of the quadratic divergence in the Higgs sector of the Standard Model in dimensional regularization is usually regarded as an exceptional property of a specific regularization. To understand what is going on in dimensional regularization, we illustrate how to reproduce its results for the λϕ⁴ theory in more conventional regularizations such as the higher derivative regularization; the basic postulate involved is that the quadratically divergent induced mass, which is independent of the scale change of the physical mass, is kinematical and unphysical. This is consistent with the derivation of the Callan-Symanzik equation, which compares two theories with slightly different masses, for the λϕ⁴ theory without encountering the quadratic divergence. In this sense dimensional regularization may be said to be generic in a bottom-up approach starting with a successful low energy theory. We also define a modified version of the mass-independent renormalization for a scalar field which leads to the homogeneous renormalization group equation. Implications of the present analysis for the Standard Model at high energies and the presence or absence of SUSY at LHC energies are briefly discussed.
Physical model of dimensional regularization
NASA Astrophysics Data System (ADS)
Schonfeld, Jonathan F.
2016-12-01
We explicitly construct fractals of dimension 4 − ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity.
Dimensional regularization in configuration space
Bollini, C.G.; Giambiagi, J.J.
1996-05-01
Dimensional regularization is introduced in configuration space by Fourier transforming in ν dimensions the perturbative momentum-space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov and Shirkov, are needed for convolutions. The regularized causal functions in x space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant analytic functions of ν. Several examples are discussed. © 1996 The American Physical Society.
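As a concrete illustration (ours, not from the abstract above) of how ultraviolet divergences appear as poles in the analytically continued dimension ν, the textbook Euclidean one-loop integral evaluates to

```latex
\int \frac{d^{\nu}k}{(2\pi)^{\nu}}\,\frac{1}{(k^{2}+m^{2})^{2}}
  = \frac{\Gamma\!\left(2-\frac{\nu}{2}\right)}{(4\pi)^{\nu/2}}\,(m^{2})^{\nu/2-2}
  \;\xrightarrow[\;\nu = 4-2\epsilon\;]{}\;
  \frac{1}{16\pi^{2}}\left(\frac{1}{\epsilon}-\gamma_{E}
  +\ln\frac{4\pi}{m^{2}}\right)+\mathcal{O}(\epsilon),
```

so the logarithmic four-dimensional divergence shows up as the simple pole of Γ(2 − ν/2) at ν = 4, which is precisely the pole structure in ν referred to in these abstracts.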
A Sim(2) invariant dimensional regularization
NASA Astrophysics Data System (ADS)
Alfaro, J.
2017-09-01
We introduce a Sim(2) invariant dimensional regularization of loop integrals. We then compute the one-loop quantum corrections to the photon self-energy, electron self-energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Multiloop integrals in dimensional regularization made simple.
Henn, Johannes M
2013-06-21
Scattering amplitudes at loop level can be expressed in terms of Feynman integrals. The latter satisfy partial differential equations in the kinematical variables. We argue that a good choice of basis for (multi)loop integrals can lead to significant simplifications of the differential equations, and propose criteria for finding an optimal basis. This builds on experience obtained in supersymmetric field theories that can be applied successfully to generic quantum field theory integrals. It involves studying leading singularities and explicit integral representations. When the differential equations are cast into canonical form, their solution becomes elementary. The class of functions involved is easily identified, and the solution can be written down to any desired order in ϵ within dimensional regularization. Results obtained in this way are particularly simple and compact. In this Letter, we outline the general ideas of the method and apply them to a two-loop example.
Recollections on Dimensional Regularization and Related Topics
NASA Astrophysics Data System (ADS)
Bollini, Carlos Guido
Professor Juan José Giambiagi and I started working on divergent diagrams in different number of dimensions in 1970. We had a certain idea about the behavior in odd or even number of dimensions, but the most important factor in our work, I think, was the previous experience with an analytical regularization method. We had developed it a few years before. Within this method the amplitudes turned out to be analytic functions of the regularizing parameter, with poles at the physical value of that parameter…
Lattice calculation of the Polyakov loop and Polyakov loop correlators
NASA Astrophysics Data System (ADS)
Weber, Johannes Heinrich
2017-03-01
We discuss calculations of the Polyakov loop and of Polyakov loop correlators using lattice gauge theory. We simulate QCD with 2+1 flavors and almost physical quark masses using the highly improved staggered quark action (HISQ). We demonstrate that the entropy derived from the Polyakov loop is a good probe of color screening. In particular, it allows for scheme-independent and quantitative conclusions about the deconfinement aspects of the crossover and for a rigorous study of the onset of weak-coupling behavior at high temperatures. We examine the correlators for small and large separations and identify vacuum-like and screening regimes in the thermal medium. We demonstrate that gauge-independent screening properties can be obtained even from gauge-fixed singlet correlators and that we can pin down the asymptotic regime.
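To make the observable concrete: on the lattice, the local Polyakov loop at a spatial site is the normalized trace of the ordered product of the temporal gauge links. The following toy sketch (ours; it uses random unitary matrices, not HISQ configurations or any actual simulation code) illustrates just that definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su3():
    # Haar-like random SU(3) matrix: QR-decompose a complex Gaussian,
    # fix the phase convention, then project the determinant to 1.
    z = (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    q = q * (d / np.abs(d)).conj()          # make diag(r) positive
    return q / np.linalg.det(q) ** (1 / 3)  # det = 1

def polyakov_loop(temporal_links):
    # Ordered product of the N_t temporal links at one spatial site;
    # the normalized trace is the local Polyakov loop L(x).
    w = np.eye(3, dtype=complex)
    for u in temporal_links:
        w = w @ u
    return np.trace(w) / 3

links = [random_su3() for _ in range(8)]  # N_t = 8, fully disordered "links"
L = polyakov_loop(links)
print(abs(L))  # small for disordered links (confined-like); 1 for unit links
```

On a real ensemble one averages L(x) over sites and configurations; a near-zero expectation value signals confinement, a nonzero one deconfinement.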
Two-dimensional chiral anomaly in differential regularization
NASA Astrophysics Data System (ADS)
Chen, W. F.
1999-07-01
The two-dimensional chiral anomaly is calculated using differential regularization. It is shown that the anomaly emerges naturally in the vector and axial Ward identities on the same footing as the four-dimensional case. The vector gauge symmetry can be achieved by an appropriate choice of the mass scales without introducing the seagull term. We have analyzed the reason why such a universal result can be obtained in differential regularization.
Polyakov loop modeling for hot QCD
NASA Astrophysics Data System (ADS)
Fukushima, Kenji; Skokov, Vladimir
2017-09-01
We review theoretical aspects of quantum chromodynamics (QCD) at finite temperature. The most important physical variable to characterize hot QCD is the Polyakov loop, which is an approximate order parameter for quark deconfinement in a hot gluonic medium. In addition to its role as an order parameter, the Polyakov loop has rich physical content in both perturbative and non-perturbative sectors. This review covers a wide range of subjects associated with the Polyakov loop from topological defects in hot QCD to model building with coupling to the Polyakov loop.
Lifshitz anomalies, Ward identities and split dimensional regularization
NASA Astrophysics Data System (ADS)
Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia
2017-03-01
We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2 + 1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562
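The paper's own binary-search/alternating-optimization algorithm is not reproduced here, but the baseline it builds on, multiple kernel dimensionality reduction, can be sketched: combine several base kernels into one Gram matrix and apply kernel PCA. All names and parameter choices below are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def poly_kernel(X, degree=2):
    return (X @ X.T + 1.0) ** degree

def multiple_kernel_pca(X, weights=(0.5, 0.5), n_components=2):
    # Convex combination of base kernels, then standard kernel PCA:
    # center the Gram matrix and project onto its top eigenvectors.
    K = weights[0] * rbf_kernel(X) + weights[1] * poly_kernel(X)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)           # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    return Kc @ vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))

X = rng.normal(size=(40, 5))
Z = multiple_kernel_pca(X)
print(Z.shape)  # (40, 2)
```

In a full multiple-kernel-learning method the weights themselves are optimized against a supervised or graph-embedding objective rather than fixed a priori as here.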
Six-dimensional regularization of chiral gauge theories
NASA Astrophysics Data System (ADS)
Fukaya, Hidenori; Onogi, Tetsuya; Yamamoto, Shota; Yamamura, Ryo
2017-03-01
We propose a regularization of four-dimensional chiral gauge theories using six-dimensional Dirac fermions. In our formulation, we consider two different mass terms having domain-wall profiles in the fifth and the sixth directions, respectively. A Weyl fermion appears as a localized mode at the junction of two different domain walls. One domain wall naturally exhibits the Stora-Zumino chain of the anomaly descent equations, starting from the axial U(1) anomaly in six dimensions to the gauge anomaly in four dimensions. Another domain wall implies a similar inflow of the global anomalies. The anomaly-free condition is equivalent to requiring that the axial U(1) anomaly and the parity anomaly are canceled among the six-dimensional Dirac fermions. Since our formulation is based on a massive vector-like fermion determinant, a nonperturbative regularization will be possible on a lattice. Putting the gauge field at the four-dimensional junction and extending it to the bulk using the Yang-Mills gradient flow, as recently proposed by Grabowska and Kaplan, we define the four-dimensional path integral of the target chiral gauge theory.
Deterministic regularization of three-dimensional optical diffraction tomography
Sung, Yongjin; Dasari, Ramachandra R.
2012-01-01
In this paper we discuss a deterministic regularization algorithm to handle the missing cone problem of three-dimensional optical diffraction tomography (ODT). The missing cone problem arises in most practical applications of ODT and is responsible for elongation of the reconstructed shape and underestimation of the value of the refractive index. By applying positivity and piecewise-smoothness constraints in an iterative reconstruction framework, we effectively suppress the missing cone artifact and recover sharp edges rounded out by the missing cone, and we significantly improve the accuracy of the predictions of the refractive index. We also show the noise handling capability of our algorithm in the reconstruction process. PMID:21811316
Polyakov loop correlator in perturbation theory
NASA Astrophysics Data System (ADS)
Berwein, Matthias; Brambilla, Nora; Petreczky, Péter; Vairo, Antonio
2017-07-01
We study the Polyakov loop correlator in the weak coupling expansion and show how the perturbative series reexponentiates into singlet and adjoint contributions. We calculate the order g^7 correction to the Polyakov loop correlator in the short distance limit. We show how the singlet and adjoint free energies arising from the reexponentiation formula of the Polyakov loop correlator are related to the gauge invariant singlet and octet free energies that can be defined in pNRQCD; namely, we find that the two definitions agree at leading order in the multipole expansion, but differ at first order in the quark-antiquark distance.
Matching effective chiral Lagrangians with dimensional and lattice regularizations
NASA Astrophysics Data System (ADS)
Niedermayer, F.; Weisz, P.
2016-04-01
We compute the free energy in the presence of a chemical potential coupled to a conserved charge in effective O(n) scalar field theory (without explicit symmetry breaking terms) to third order for asymmetric volumes in general d dimensions, using dimensional (DR) and lattice regularizations. This yields relations between the 4-derivative couplings appearing in the effective actions for the two regularizations, which in turn allows us to translate results, e.g. the mass gap in a finite periodic box in d = 3 + 1 dimensions, from one regularization to the other. Consistency is found with a new direct computation of the mass gap using DR. For the case n = 4, d = 4 the model is the low-energy effective theory of QCD with N_f = 2 massless quarks. The results can thus be used to obtain estimates of low energy constants in the effective chiral Lagrangian from measurements of the low energy observables, including the low-lying spectrum of N_f = 2 QCD in the δ-regime using lattice simulations, as proposed by Peter Hasenfratz, or from the susceptibility corresponding to the chemical potential used.
Grats, Yu. V.; Spirin, P. A.
2016-01-15
The self-energy of a classical charged particle localized at a relatively large distance outside the event horizon of an (n + 1)-dimensional Schwarzschild–Tangherlini black hole for an arbitrary n ≥ 3 is calculated. An expression for the electrostatic Green function is derived in the first two orders of the perturbation theory. Dimensional regularization is proposed to be used to regularize the corresponding formally divergent expression for the self-energy. The derived expression for the renormalized self-energy is compared with the results of other authors.
Effective potential for Polyakov loops in lattice QCD
NASA Astrophysics Data System (ADS)
Nemoto, Y.; RBC Collaboration
2003-05-01
Toward the derivation of an effective theory for Polyakov loops in lattice QCD, we examine Polyakov loop correlation functions using the multi-level algorithm which was recently developed by Lüscher and Weisz.
Polyakov loop and correlator of Polyakov loops at next-to-next-to-leading order
Brambilla, Nora; Vairo, Antonio; Ghiglieri, Jacopo; Petreczky, Peter
2010-10-01
We study the Polyakov loop and the correlator of two Polyakov loops at finite temperature in the weak-coupling regime. We calculate the Polyakov loop at order g^4. The calculation of the correlator of two Polyakov loops is performed at distances shorter than the inverse of the temperature and for electric screening masses larger than the Coulomb potential. In this regime, it is accurate up to order g^6. We also evaluate the Polyakov-loop correlator in an effective field theory framework that takes advantage of the hierarchy of energy scales in the problem and makes explicit the bound-state dynamics. In the effective field theory framework, we show that the Polyakov-loop correlator is at leading order in the multipole expansion the sum of a color-singlet and a color-octet quark-antiquark correlator, which are gauge invariant, and compute the corresponding color-singlet and color-octet free energies.
Regularized and generalized solutions of infinite-dimensional stochastic problems
Alshanskiy, Maxim A; Mel'nikova, Irina V
2011-11-30
The paper is concerned with solutions of Cauchy's problem for stochastic differential-operator equations in separable Hilbert spaces. Special emphasis is placed on the case when the operator coefficient of the equation is not a generator of a C_0-class semigroup, but rather generates some regularized semigroup. Regularized solutions of equations in the Itô form with a Wiener process as an inhomogeneity and generalized solutions of equations with white noise are constructed in various spaces of abstract distributions. Bibliography: 23 titles.
Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S
2014-05-01
We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation-time scheme, at a moderate computational overhead.
Critical phenomena in the majority voter model on two-dimensional regular lattices.
Acuña-Lara, Ana L; Sastre, Francisco; Vargas-Arriola, José Raúl
2014-05-01
In this work we studied the critical behavior of the majority voter model and the dependence of its critical point on the number of nearest neighbors on two-dimensional regular lattices. We performed numerical simulations on triangular, hexagonal, and bilayer square lattices. Using standard finite-size scaling theory we found that all cases fall in the two-dimensional Ising model universality class, but that the critical point value for the bilayer lattice does not follow the regular tendency that the Ising model shows.
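The update rule behind such simulations is simple: each spin aligns with the majority of its neighbors with probability 1 − q, where q is the noise parameter. A minimal sketch (ours, on a plain square lattice rather than the triangular, hexagonal, or bilayer lattices studied in the paper; lattice size, q, and sweep count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def majority_vote_sweep(s, q):
    # One random-sequential sweep of the majority-vote model with noise q:
    # each chosen spin aligns with its local majority with probability 1 - q.
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        m = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
             + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        if m == 0:
            s[i, j] = rng.choice((-1, 1))        # tie: coin flip
        else:
            sign = 1 if m > 0 else -1
            s[i, j] = sign if rng.random() > q else -sign
    return s

L, q = 16, 0.04        # q below the square-lattice critical noise q_c ~ 0.075
s = np.ones((L, L), dtype=int)
for _ in range(50):
    majority_vote_sweep(s, q)
print(abs(s.mean()))   # magnetization stays large in the ordered phase
```

Scanning q across q_c and applying finite-size scaling to the magnetization and its cumulants is what locates the critical point and exponents in studies like the one above.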
Phase structure of the Polyakov-quark-meson model
NASA Astrophysics Data System (ADS)
Schaefer, B.-J.; Pawlowski, J. M.; Wambach, J.
2007-10-01
The relation between the deconfinement and chiral phase transition is explored in the framework of a Polyakov-loop-extended two-flavor quark-meson (PQM) model. In this model the Polyakov loop dynamics is represented by a background temporal gauge field which also couples to the quarks. As a novelty, an explicit quark chemical potential and Nf-dependence in the Polyakov loop potential is proposed by using renormalization group arguments. The behavior of the Polyakov loop as well as the chiral condensate as a function of temperature and quark chemical potential is obtained by minimizing the grand canonical thermodynamic potential of the system. The effect of the Polyakov loop dynamics on the chiral phase diagram and on several thermodynamic bulk quantities is presented.
Managing γ5 in Dimensional Regularization II: the Trace with More γ5's
NASA Astrophysics Data System (ADS)
Ferrari, Ruggero
2017-03-01
In the present paper we evaluate the anomaly for the abelian axial current in a non-abelian chiral gauge theory, by using dimensional regularization. This amounts to formulating a procedure for managing traces with more than one γ5. The suggested procedure obeys Lorentz covariance and cyclicity, at variance with previous approaches (e.g. the celebrated 't Hooft-Veltman scheme, where Lorentz covariance is violated). The result of the present paper is a further step forward in the program initiated by a previous work on the traces involving a single γ5. The final goal is an unconstrained definition of γ5 in dimensional regularization. Here, in the evaluation of the anomaly, we profit from the axial current conservation equation, valid when radiative corrections are neglected. This kind of tool is not always exploited in field theories with γ5, e.g. in the use of dimensional regularization for infrared and collinear divergences.
Algamal, Zakariya Yahya; Lee, Muhammad Hisyam
2015-12-01
Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable for certain reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly more consistent in selecting genes than the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification.
Nonet meson properties in the Nambu-Jona-Lasinio model with dimensional versus cutoff regularization
Inagaki, T.; Kimura, D.; Kohyama, H.; Kvinikhidze, A.
2011-02-01
The Nambu-Jona-Lasinio model with a Kobayashi-Maskawa-'t Hooft term is one low energy effective theory of QCD which includes the U_A(1) anomaly. We investigate nonet meson properties in this model with three flavors of quarks. We employ two types of regularizations, the dimensional and sharp cutoff ones. The model parameters are fixed phenomenologically for each regularization. Evaluating the kaon decay constant, the η meson mass and the topological susceptibility, we show the regularization dependence of the results and discuss the applicability of the Nambu-Jona-Lasinio model.
NASA Astrophysics Data System (ADS)
Liao, Xian; Zhang, Ping
2016-06-01
Regarding P.-L. Lions' open question in Oxford Lecture Series in Mathematics and its Applications, Vol. 3 (1996) concerning the propagation of regularity for the density patch, we establish the global existence of solutions to the two-dimensional inhomogeneous incompressible Navier-Stokes system with initial density given by (1 − η)1_{Ω_0} + 1_{Ω_0^c} for some small enough constant η and some W^{k+2,p} domain Ω_0, with initial vorticity belonging to L^1 ∩ L^p and with appropriate tangential regularities. Furthermore, we prove that the regularity of the domain Ω_0 is preserved by time evolution.
Fujihara, T.; Kimura, D.; Inagaki, T.; Kvinikhidze, A.
2009-05-01
We investigate the color superconducting phase at high density in the extended Nambu-Jona-Lasinio model for two-flavor quarks. Because of the nonrenormalizability of the model, physical observables may depend on the regularization procedure; that is why we apply two types of regularization, the cutoff and the dimensional one, to evaluate the phase structure, the equation of state, and the relationship between the mass and the radius of a dense star. To obtain the phase structure we evaluate the minimum of the effective potential at finite temperature and chemical potential. The stress tensor is calculated to derive the equation of state. Solving the Tolman-Oppenheimer-Volkoff equation, we show the relationship between the mass and the radius of a dense star. Interestingly, the dependence on the regularization is found not to be small. The dimensional regularization predicts a color superconductivity phase at rather large values of μ (in agreement with perturbative QCD, in contrast to the cutoff regularization), over a larger temperature interval, and the existence of heavier and larger quark stars.
Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization
NASA Astrophysics Data System (ADS)
Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna
2014-12-01
We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields, the resulting renormalization method is always applicable, we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.
Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models.
Liu, Han; Roeder, Kathryn; Wasserman, Larry
2010-12-31
A challenging problem in estimating high-dimensional graphical models is to choose the regularization parameter in a data-dependent way. The standard techniques include K-fold cross-validation (K-CV), Akaike information criterion (AIC), and Bayesian information criterion (BIC). Though these methods work well for low-dimensional problems, they are not suitable in high dimensional settings. In this paper, we present StARS: a new stability-based method for choosing the regularization parameter in high dimensional inference for undirected graphs. The method has a clear interpretation: we use the least amount of regularization that simultaneously makes a graph sparse and replicable under random sampling. This interpretation requires essentially no conditions. Under mild conditions, we show that StARS is partially sparsistent in terms of graph estimation: i.e. with high probability, all the true edges will be included in the selected model even when the graph size diverges with the sample size. Empirically, the performance of StARS is compared with the state-of-the-art model selection procedures, including K-CV, AIC, and BIC, on both synthetic data and a real microarray dataset. StARS outperforms all these competing procedures.
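The StARS criterion itself is easy to state in code: over many subsamples, estimate the graph at each regularization level, measure how unstable each edge's selection is, and keep the least regularization whose average instability stays below a threshold β. The sketch below is ours and substitutes a simple correlation-thresholding graph estimator for the graphical lasso used in practice; `beta`, the subsample fraction, and the λ grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def edge_set(X, lam):
    # Simplified stand-in for the graph estimator: threshold the absolute
    # sample correlation at lam (StARS typically wraps graphical lasso).
    C = np.corrcoef(X, rowvar=False)
    A = np.abs(C) > lam
    np.fill_diagonal(A, False)
    return A

def stars(X, lambdas, beta=0.05, n_sub=20, frac=0.8):
    n, p = X.shape
    b = int(frac * n)
    best = max(lambdas)
    for lam in sorted(lambdas, reverse=True):   # strong -> weak regularization
        theta = np.zeros((p, p))
        for _ in range(n_sub):
            idx = rng.choice(n, size=b, replace=False)
            theta += edge_set(X[idx], lam)
        theta /= n_sub                          # edge selection frequencies
        instability = (2 * theta * (1 - theta)).sum() / (p * (p - 1))
        if instability > beta:
            break                               # keep the last stable lambda
        best = lam
    return best

X = rng.normal(size=(200, 6))
lam_star = stars(X, lambdas=[0.1, 0.2, 0.3, 0.4, 0.5])
print(lam_star)
```

The loop walks from the most regularized (sparsest, most stable) graphs toward denser ones and stops as soon as edges start flickering across subsamples, which is exactly the "least regularization that keeps the graph replicable" interpretation in the abstract.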
NASA Astrophysics Data System (ADS)
Phillips, D. R.; Afnan, I. R.; Henry-Edwards, A. G.
2000-04-01
Dimensional regularization is applied to the Lippmann-Schwinger equation for a separable potential which gives rise to logarithmic singularities in the Born series. For this potential a subtraction at a fixed energy can be used to renormalize the amplitude and produce a finite solution to the integral equation for all energies. This can be done either algebraically or numerically. In the latter case dimensional regularization can be implemented by solving the integral equation in a lower number of dimensions, fixing the potential strength, and computing the phase shifts, while taking the limit as the number of dimensions approaches three. We demonstrate that these steps can be carried out in a numerically stable way, and show that the results thereby obtained agree with those found when the renormalization is performed algebraically to four significant figures.
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data
Dazard, Jean-Eudes; Rao, J. Sunil
2012-01-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950
Visualizations of coherent center domains in local Polyakov loops
Stokes, Finn M.; Kamleh, Waseem; Leinweber, Derek B.
2014-09-15
Quantum Chromodynamics exhibits a hadronic confined phase at low to moderate temperatures and, at a critical temperature T_C, undergoes a transition to a deconfined phase known as the quark-gluon plasma. The nature of this deconfinement phase transition is probed through visualizations of the Polyakov loop, a gauge independent order parameter. We produce visualizations that provide novel insights into the structure and evolution of center clusters. Using the HMC algorithm, the percolation during the deconfinement transition is observed. Using 3D rendering of the phase and magnitude of the Polyakov loop, the fractal structure and correlations are examined. The evolution of the center clusters as the gauge fields thermalize from below the critical temperature to above it is also exposed. We observe deconfinement proceeding through a competition for the dominance of a particular center phase. We use stout-link smearing to remove small-scale noise in order to observe the large-scale evolution of the center clusters. A correlation between the magnitude of the Polyakov loop and the proximity of its phase to one of the center phases of SU(3) is evident in the visualizations.
Franklin, Jessica M; Eddings, Wesley; Glynn, Robert J; Schneeweiss, Sebastian
2015-10-01
Selection and measurement of confounders is critical for successful adjustment in nonrandomized studies. Although the principles behind confounder selection are now well established, variable selection for confounder adjustment remains a difficult problem in practice, particularly in secondary analyses of databases. We present a simulation study that compares the high-dimensional propensity score algorithm for variable selection with approaches that utilize direct adjustment for all potential confounders via regularized regression, including ridge regression and lasso regression. Simulations were based on 2 previously published pharmacoepidemiologic cohorts and used the plasmode simulation framework to create realistic simulated data sets with thousands of potential confounders. Performance of methods was evaluated with respect to bias and mean squared error of the estimated effects of a binary treatment. Simulation scenarios varied the true underlying outcome model, treatment effect, prevalence of exposure and outcome, and presence of unmeasured confounding. Across scenarios, high-dimensional propensity score approaches generally performed better than regularized regression approaches. However, including the variables selected by lasso regression in a regular propensity score model also performed well and may provide a promising alternative variable selection method.
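The lasso selection step referred to above can be sketched with a plain coordinate-descent solver (a generic textbook implementation, not the authors' pipeline; `lasso_cd` and its arguments are illustrative). Covariates with nonzero coefficients would then be carried into a regular propensity score model.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso for min ||y - X b||^2 / 2 + lam ||b||_1.

    Assumes columns of X are roughly standardized. Cycles through the
    coordinates, soft-thresholding each one in turn."""
    n, p = X.shape
    beta = np.zeros(p)
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_norm2[j]
    return beta
```

The nonzero support of `beta` gives the candidate confounder set; the shrunken coefficient values themselves would be discarded before refitting the propensity score model.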
Random packing of regular polygons and star polygons on a flat two-dimensional surface.
Cieśla, Michał; Barbasz, Jakub
2014-08-01
Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using a random sequential adsorption algorithm. The obtained results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packings reproduce already known results for disks.
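The random sequential adsorption idea is simple to sketch for the limiting case of disks (a toy version with hard walls and a fixed attempt budget, not the authors' saturated-packing algorithm; the function name and parameters are illustrative):

```python
import math
import random

def rsa_disks(radius, attempts, seed=0):
    """Random sequential adsorption of equal disks in the unit square.

    Candidate centers are drawn uniformly; a disk is accepted only if it
    does not overlap any previously placed disk. Returns the covered
    area fraction after the given number of attempts."""
    rng = random.Random(seed)
    centers = []
    min_d2 = (2 * radius) ** 2
    for _ in range(attempts):
        x = rng.uniform(radius, 1 - radius)
        y = rng.uniform(radius, 1 - radius)
        if all((x - u) ** 2 + (y - v) ** 2 >= min_d2 for u, v in centers):
            centers.append((x, y))
    return len(centers) * math.pi * radius ** 2
```

A production code would run until saturation (no more insertions possible) and use a neighbor grid instead of the all-pairs overlap test.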
NASA Astrophysics Data System (ADS)
Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu
2016-12-01
Ionospheric tomography is based on the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distribution. Because the satellite-receiver geometry provides incomplete measurements, this is a typical ill-posed problem, and how to overcome the ill-posedness remains a central research question. In this paper, the Tikhonov regularization method is used, and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance with a location-dependent spatial correlation, and the correlation model is developed using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used for independent validation. Test cases using both artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.
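The Tikhonov step described above amounts to a penalized least-squares fit pulled toward the background field. A minimal sketch assuming a linear observation operator `A` (the helper name and the closed-form solve are illustrative; the paper's model function approach for choosing the regularization parameter is not reproduced here):

```python
import numpy as np

def tikhonov_with_background(A, b, x_bg, lam):
    """Solve x = argmin ||A x - b||^2 + lam ||x - x_bg||^2.

    A: observation operator mapping the state to the measurements,
    b: observations (e.g. sTEC values), x_bg: background state.
    Closed form: x = x_bg + (A^T A + lam I)^{-1} A^T (b - A x_bg)."""
    n = A.shape[1]
    return x_bg + np.linalg.solve(A.T @ A + lam * np.eye(n),
                                  A.T @ (b - A @ x_bg))
```

Larger `lam` trusts the background more; smaller `lam` trusts the (incomplete) ray measurements more, which is exactly the trade-off the regularization parameter controls.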
Optically programmable encoder based on light propagation in two-dimensional regular nanoplates.
Li, Ya; Zhao, Fangyin; Guo, Shuai; Zhang, Yongyou; Niu, Chunhui; Zeng, Ruosheng; Zou, Bingsuo; Zhang, Wensheng; Ding, Kang; Bukhtiar, Arfan; Liu, Ruibin
2017-04-07
We design an efficient optically controlled microdevice based on CdSe nanoplates. Two-dimensional CdSe nanoplates exhibit lighting patterns around the edges and can be realized as a new type of optically controlled programmable encoder. The light source is used to excite the nanoplates and control the logical position under vertical pumping mode by the objective lens. At each excitation point in the nanoplates, the preferred light-propagation routes are along the normal direction and perpendicular to the edges, which then emit out from the edges to form a localized lighting section. The intensity distribution around the edges of different nanoplates demonstrates that the lighting part with a small scale is much stronger, defined as '1', than the dark section, defined as '0', along the edge. These '0' and '1' are the basic logic elements needed to compose logically functional devices. The observed propagation rules are consistent with theoretical simulations, meaning that the guided-light route in two-dimensional semiconductor nanoplates is regular and predictable. The same situation was also observed in regular CdS nanoplates. Basic theoretical analysis and experiments prove that the guided light and exit position follow rules mainly originating from the shape rather than material itself.
Local feedback regularization of three-dimensional Navier-Stokes equations on bounded domains
NASA Astrophysics Data System (ADS)
Balogh, Andras
One of the outstanding open problems in applied mathematics is the question of well-posedness of the initial boundary value problem associated with the three-dimensional fluid flow. At the same time, due to important applications in control theory, numerical analysis and turbulence, various types of regularizations and controls are gaining new interest. The specific problem we consider here is inspired by recent advances in the control of nonlinear distributed parameter systems and its possible applications to hydrodynamics. The main objective is to investigate the extent to which the 3-dimensional Navier-Stokes system can be regularized using a particular, physically motivated, feedback control law. The feedback is introduced in the form of an additional nonlinear viscosity term. Since control over the whole domain is not feasible in general, i.e., it is not usually possible to measure the entire state of the system, we consider a feedback supported only on a subdomain. On the rest of the domain the classical Navier-Stokes equations govern the fluid flow. The additional viscosity term is physically meaningful in the sense that it is proportional to the energy dissipation functional on the subdomain. For the controlled system we prove the existence, uniqueness and stability of the strong solution for initial data and forcing term which are arbitrary on the subdomain of control and are sufficiently small (in appropriate function spaces) outside this subdomain.
Müller-Stich, Beat P; Löb, Nicole; Wald, Diana; Bruckner, Thomas; Meinzer, Hans-Peter; Kadmon, Martina; Büchler, Markus W; Fischer, Lars
2013-09-25
Three-dimensional (3D) presentations enhance the understanding of complex anatomical structures. However, it has been shown that two-dimensional (2D) "key views" of anatomical structures may suffice to improve spatial understanding. The impact of real 3D images (3Dr), visible only with 3D glasses, has not been examined yet. Contrary to 3Dr, regular 3D images apply techniques such as shadows and different grades of transparency to create the impression of 3D. This randomized study aimed to define the impact of both the addition of key views to CT images (2D+) and the use of 3Dr on the identification of liver anatomy, in comparison with regular 3D presentations (3D). A computer-based teaching module (TM) was used. Medical students were randomized to three groups (2D+, 3Dr, or 3D) and asked to answer 11 anatomical questions and 4 evaluative questions. Both 3D groups had animated models of the human liver available to them which could be moved in all directions. 156 medical students (57.7% female) participated in this randomized trial. Students exposed to 3Dr and 3D performed significantly better than those exposed to 2D+ (p < 0.01, ANOVA). There were no significant differences between 3D and 3Dr and no significant gender differences (p > 0.1, t-test). Students randomized to 3D and 3Dr not only had significantly better results, but they also were significantly faster in answering the 11 anatomical questions when compared to students randomized to 2D+ (p < 0.03, ANOVA). Whether or not "key views" were used had no significant impact on the number of correct answers (p > 0.3, t-test). This randomized trial confirms that regular 3D visualization improves the identification of liver anatomy.
Seiberg-Witten and 'Polyakov-like' Magnetic Bion Confinements are Continuously Connected
Poppitz, Erich; Unsal, Mithat
2012-06-01
We study four-dimensional N = 2 supersymmetric pure-gauge (Seiberg-Witten) theory and its N = 1 mass perturbation by using compactification on S^1 × R^3. It is well known that on R^4 (or at large S^1 size L) the perturbed theory realizes confinement through monopole or dyon condensation. At small S^1, we demonstrate that confinement is induced by a generalization of Polyakov's three-dimensional instanton mechanism to a locally four-dimensional theory - the magnetic bion mechanism - which also applies to a large class of nonsupersymmetric theories. Using a large- vs. small-L Poisson duality, we show that the two mechanisms of confinement, previously thought to be distinct, are in fact continuously connected.
Renormalization of Polyakov loops in fundamental and higher representations
Kaczmarek, O.; Gupta, S.; Huebner, K.
2007-07-30
We compare two renormalization procedures, one based on the short-distance behavior of heavy quark-antiquark free energies and the other on bare Polyakov loops at different temporal extents of the lattice, and find that both prescriptions are equivalent, resulting in renormalization constants that depend on the bare coupling. Furthermore, these renormalization constants show Casimir scaling for higher representations of the Polyakov loops. The analysis of Polyakov loops in different representations of the color SU(3) group indicates that a simple perturbatively inspired relation in terms of the quadratic Casimir operator is realized to a good approximation at temperatures T ≳ T_c, for renormalized as well as bare loops. In contrast to a vanishing Polyakov loop in representations with non-zero triality in the confined phase, the adjoint loops are small but non-zero even for temperatures below the critical one. The adjoint quark-antiquark pairs exhibit screening. This behavior can be related to the binding energy of gluelump states.
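The quadratic Casimir relation underlying Casimir scaling is easy to check numerically. A small sketch (helper names are illustrative) computes C2 for SU(3) irreps from their Dynkin labels; Casimir scaling then predicts L_R ≈ (L_F)^{C2(R)/C2(F)} for the renormalized loop in representation R.

```python
from fractions import Fraction

def casimir_su3(p, q):
    """Quadratic Casimir of the SU(3) irrep with Dynkin labels (p, q),
    normalized so the fundamental (1,0) has C2 = 4/3:
    C2(p, q) = (p^2 + q^2 + p q + 3 p + 3 q) / 3."""
    return Fraction(p * p + q * q + p * q + 3 * p + 3 * q, 3)

def casimir_ratio(p, q):
    """d_R = C2(R) / C2(fundamental), the exponent in Casimir scaling."""
    return casimir_su3(p, q) / casimir_su3(1, 0)
```

For example, the adjoint (1,1) has C2 = 3 and d_A = 9/4, and the sextet (2,0) has d_6 = 5/2.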
Mechanics of shear banding in a regularized two-dimensional model of a granular medium
NASA Astrophysics Data System (ADS)
Hunt, G. W.; Hammond, J.
2012-10-01
A regularized two-dimensional model for the buckling of force chains is presented, comprising identical rigid discs sitting initially in a conventional close-packed arrangement. As linear elastic constitutive laws are used throughout, the only nonlinearity in the system comes from large rotations as the resulting force chains are obliged to buckle under imposed end-shortening. The evolving deflected shapes are seen to develop and interact in a highly complex bifurcation structure. Analysis by the nonlinear continuation code Auto exposes at realistic load levels an energy landscape rich in local minima. A number of such states are identified, amongst them families of solutions with the familiar appearance of shear bands over a finite number of discs. A well-known "snakes and ladders" pattern is identified as the mechanism for the addition of extra discs to increase the width of the band.
Regularization of the two-dimensional filter diagonalization method: FDM2K
Chen; Mandelshtam; Shaka
2000-10-01
We outline an important advance in the problem of obtaining a two-dimensional (2D) line list of the most prominent features in a 2D high-resolution NMR spectrum in the presence of noise, when using the Filter Diagonalization Method (FDM) to sidestep limitations of conventional FFT processing. Although respectable absorption-mode spectra have been obtained previously by the artifice of "averaging" several FDM calculations, no 2D line list could be directly obtained from the averaged spectrum, and each calculation produced numerical artifacts that were demonstrably inconsistent with the measured data, but which could not be removed a posteriori. By regularizing the intrinsically ill-defined generalized eigenvalue problem that FDM poses, in a particular quite plausible way, features that are weak or stem from numerical problems are attenuated, allowing better characterization of the dominant spectral features. We call the new algorithm FDM2K. Copyright 2000 Academic Press.
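A generic way to regularize an ill-conditioned generalized eigenvalue problem A v = λ B v is to damp B before inverting. The sketch below uses a simple shift B → B + qI and is only a stand-in for the specific FDM2K prescription (the function name and the choice of shift are illustrative):

```python
import numpy as np

def regularized_gev(A, B, q):
    """Solve A v = lam B v when B is (nearly) singular.

    Adding a small positive q to the diagonal of B tames the modes of
    the generalized eigenproblem that are dominated by noise, at the
    cost of slightly biasing the well-conditioned eigenvalues."""
    lam, V = np.linalg.eig(np.linalg.solve(B + q * np.eye(len(B)), A))
    return lam, V
```

Well-determined eigenvalues move only by O(q), while the spurious ones associated with tiny eigenvalues of B are strongly attenuated instead of exploding.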
NASA Astrophysics Data System (ADS)
Arikan, Orhan
1994-05-01
Well bore measurements of conductivity, gravity, and surface measurements of magnetotelluric fields can be modeled as a two-dimensional integral equation with additive measurement noise. The governing integral equation has the form of convolution in the first dimension and projection in the second dimension. However, these two operations are not in separable form. In these applications, given a set of measurements, efficient and robust estimation of the underlying physical property is required. For this purpose, a regularized inversion algorithm for the governing integral equation is presented in this paper. Singular value decomposition of the measurement kernels is used to exploit convolution-projection structure of the integral equation, leading to a form where measurements are related to the physical property by a two-stage operation: projection followed by convolution. On the other hand, estimation of the physical property can be carried out by a two-stage inversion algorithm: deconvolution followed by back projection. A regularization method for the required multichannel deconvolution is given. Some important details of the algorithm are addressed in an application to wellbore induction measurements of conductivity.
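The deconvolution stage can be sketched as Tikhonov-regularized division in the Fourier domain under a circular-convolution assumption (a generic single-channel sketch, not the paper's multichannel scheme; `lam` is an illustrative regularization weight):

```python
import numpy as np

def tikhonov_deconvolve(y, h, lam):
    """Regularized deconvolution of y = h * x (circular model).

    In the Fourier domain the estimate is
        X = conj(H) Y / (|H|^2 + lam),
    which damps frequencies where the kernel H is small instead of
    dividing by them directly."""
    H = np.fft.fft(h, n=len(y))
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))
```

With `lam = 0` this is naive inverse filtering, which amplifies measurement noise at frequencies the kernel suppresses; a small positive `lam` trades a little bias for stability.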
Regularization for Cox’s Proportional Hazards Model with NP-Dimensionality
Fan, Jianqing; Jiang, Jiancheng
2011-01-01
High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a large amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant relaxation of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171
Duality and the Knizhnik-Polyakov-Zamolodchikov relation in Liouville quantum gravity.
Duplantier, Bertrand; Sheffield, Scott
2009-04-17
We present a (mathematically rigorous) probabilistic and geometrical proof of the Knizhnik-Polyakov-Zamolodchikov relation between scaling exponents in a Euclidean planar domain D and in Liouville quantum gravity. It uses the properly regularized quantum area measure dμ_γ = ε^{γ²/2} e^{γ h_ε(z)} dz, where dz is the Lebesgue measure on D, γ is a real parameter, 0 ≤ γ < 2.
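For reference, the KPZ relation proved in this work connects the Euclidean scaling exponent x of a fractal set to its quantum counterpart Δ, in the quadratic form used by Duplantier and Sheffield:

```latex
x \;=\; \frac{\gamma^{2}}{4}\,\Delta^{2} \;+\; \Bigl(1-\frac{\gamma^{2}}{4}\Bigr)\Delta .
```

At γ = 0 the relation degenerates to x = Δ, recovering the Euclidean geometry.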
Polyakov-Nambu-Jona-Lasinio model in finite volumes
NASA Astrophysics Data System (ADS)
Bhattacharyya, Abhijit; Ghosh, Sanjay K.; Ray, Rajarshi; Saha, Kinkar; Upadhaya, Sudipa
2016-12-01
We discuss the 2+1 flavor Polyakov loop enhanced Nambu-Jona-Lasinio model in a finite volume. The main objective is to check the volume scaling of thermodynamic observables for various temperatures and chemical potentials. We observe the possible violation of the scaling with system size in a considerable window along the whole transition region in the T–μ_q plane.
Globally regular instability of 3-dimensional anti-de Sitter spacetime.
Bizoń, Piotr; Jałmużna, Joanna
2013-07-26
We consider three-dimensional anti-de Sitter (AdS) gravity minimally coupled to a massless scalar field and study numerically the evolution of small smooth circularly symmetric perturbations of the AdS3 spacetime. As in higher dimensions, for a large class of perturbations, we observe a turbulent cascade of energy to high frequencies which entails instability of AdS3. However, in contrast to higher dimensions, the cascade cannot be terminated by black hole formation because small perturbations have energy below the black hole threshold. This situation appears to be challenging for the cosmic censor. Analyzing the energy spectrum of the cascade we determine the width ρ(t) of the analyticity strip of solutions in the complex spatial plane and argue by extrapolation that ρ(t) does not vanish in finite time. This provides evidence that the turbulence is too weak to produce a naked singularity and the solutions remain globally regular in time, in accordance with the cosmic censorship hypothesis.
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in space, but also incorporates spatial and temporal regularization to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
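The Tikhonov baselines mentioned above differ only in the penalty operator. A minimal sketch (generic implementation; the `tikhonov` helper and the forward-difference matrix are illustrative, and the STRE model itself is not reproduced here):

```python
import numpy as np

def tikhonov(A, b, lam, order=0):
    """Zero- or first-order Tikhonov regularization.

    order=0 penalizes ||x||^2 (small solutions); order=1 penalizes
    ||D x||^2 with D the forward-difference operator (smooth solutions).
    Solves the normal equations (A^T A + lam L^T L) x = A^T b."""
    n = A.shape[1]
    if order == 0:
        L = np.eye(n)
    else:
        L = (np.eye(n, k=1) - np.eye(n))[:-1]   # (n-1) x n difference matrix
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
```

For inverse ECG problems the operator choice matters because the zero-order penalty biases potentials toward zero amplitude, while the first-order penalty biases them toward spatial smoothness.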
Volume dependence of two-dimensional large-N QCD with a nonzero density of baryons
Bringoltz, Barak
2009-05-15
We take a first step towards the solution of QCD in 1+1 dimensions at nonzero density. We regularize the theory in the UV by using a lattice and in the IR by putting the theory in a box of spatial size L. After fixing to axial gauge we use the coherent states approach to obtain the large-N classical Hamiltonian H that describes color neutral quark-antiquark pairs interacting with spatial Polyakov loops in the background of baryons. Minimizing H we get a regularized form of the 't Hooft equation that depends on the expectation values of the Polyakov loops. Analyzing the L dependence of this equation we show how volume independence, a la Eguchi and Kawai, emerges in the large-N limit, and how it depends on the expectation values of the Polyakov loops. We describe how this independence relies on the realization of translation symmetry, in particular, when the ground state contains a baryon crystal. Finally, we remark on the implications of our results on studying baryon density in large-N QCD within single-site lattice theories and on some general lessons concerning the way four-dimensional large-N QCD behaves in the presence of baryons.
Shape and Symmetry Determine Two-Dimensional Melting Transitions of Hard Regular Polygons
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Antonaglia, James; Millan, Jaime A.; Engel, Michael; Glotzer, Sharon C.
2017-04-01
The melting transition of two-dimensional systems is a fundamental problem in condensed matter and statistical physics that has advanced significantly through the application of computational resources and algorithms. Two-dimensional systems present the opportunity for novel phases and phase transition scenarios not observed in 3D systems, but these phases depend sensitively on the system and, thus, predicting how any given 2D system will behave remains a challenge. Here, we report a comprehensive simulation study of the phase behavior near the melting transition of all hard regular polygons with 3 ≤ n ≤ 14 vertices using massively parallel Monte Carlo simulations of up to 1 × 10^6 particles. By investigating this family of shapes, we show that the melting transition depends upon both particle shape and symmetry considerations, which together can predict which of three different melting scenarios will occur for a given n. We show that systems of polygons with as few as seven edges behave like hard disks; they melt continuously from a solid to a hexatic fluid and then undergo a first-order transition from the hexatic phase to the isotropic fluid phase. We show that this behavior, which holds for all 7 ≤ n ≤ 14, arises from weak entropic forces among the particles. Strong directional entropic forces align polygons with fewer than seven edges and impose local order in the fluid. These forces can enhance or suppress the discontinuous character of the transition depending on whether the local order in the fluid is compatible with the local order in the solid. As a result, systems of triangles, squares, and hexagons exhibit a Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) predicted continuous transition between isotropic fluid and triatic, tetratic, and hexatic phases, respectively, and a continuous transition from the appropriate x-atic to the solid. In particular, we find that systems of hexagons display continuous two-step KTHNY melting. In contrast, due to
Three-dimensional beam pattern of regular sperm whale clicks confirms bent-horn hypothesis
NASA Astrophysics Data System (ADS)
Zimmer, Walter M. X.; Tyack, Peter L.; Johnson, Mark P.; Madsen, Peter T.
2005-03-01
The three-dimensional beam pattern of a sperm whale (Physeter macrocephalus) tagged in the Ligurian Sea was derived using data on regular clicks from the tag and from hydrophones towed behind a ship circling the tagged whale. The tag defined the orientation of the whale, while sightings and beamformer data were used to locate the whale with respect to the ship. The existence of a narrow, forward-directed P1 beam with source levels exceeding 210 dB_peak re: 1 μPa at 1 m is confirmed. A modeled forward-beam pattern, that matches clicks >20° off-axis, predicts a directivity index of 26.7 dB and source levels of up to 229 dB_peak re: 1 μPa at 1 m. A broader backward-directed beam is produced by the P0 pulse with source levels near 200 dB_peak re: 1 μPa at 1 m and a directivity index of 7.4 dB. A low-frequency component with source levels near 190 dB_peak re: 1 μPa at 1 m is generated at the onset of the P0 pulse by air resonance. The results support the bent-horn model of sound production in sperm whales. While the sperm whale nose appears primarily adapted to produce an intense forward-directed sonar signal, less-directional click components convey information to conspecifics, and give rise to echoes from the seafloor and the surface, which may be useful for orientation during dives.
NASA Astrophysics Data System (ADS)
Aketagawa, Masato; Honda, Hiroshi; Ishige, Masashi; Patamaporn, Chaikool
2007-02-01
A two-dimensional (2D) encoder with picometre resolution using multi-tunnelling-probes scanning tunnelling microscope (MTP-STM) as detector units and a regular crystalline lattice as a reference is proposed. In experiments to demonstrate the method, a highly oriented pyrolytic graphite (HOPG) crystal is utilized as the reference. The MTP-STM heads, which are set upon a sample stage, observe multi-points which satisfy some relationship on the HOPG crystalline surface on the sample stage, and the relative 2D displacement between the MTP-STM heads and the sample stage can be determined from the multi-current signals of the multi-points. Two unit lattice vectors on the HOPG crystalline surface with length and intersection angle of 0.246 nm and 60°, respectively, are utilized as 2D displacement references. 2D displacement of the sample stage on which the HOPG crystal is placed can be calculated using the linear sum of the two unit lattice vectors, derived from a linear operation of the multi-current signals. Displacement interpolation less than the lattice spacing of the HOPG crystal can also be performed. To determine the linear sum of the two unit vectors as the 2D displacement, the multi-points to be observed with the MTP-STM must be properly positioned according to the 2D atomic structure of the HOPG crystal. In the experiments, the proposed method is compared with a capacitance sensor whose resolution is improved to approximately 0.1 nm by limiting the sensor's bandwidth to 300 Hz. In order to obtain suitable multi-current signals of the properly positioned multi-points in semi-real-time, lateral dither modulations are applied to the STM probes. The results show that the proposed method has the capability to measure 2D lateral displacements with a resolution on the order of 10 pm with a maximum measurement speed of 100 nm s^-1 or more.
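The decoding step, expressing a 2D displacement as a linear combination of the two unit lattice vectors, can be sketched directly (hypothetical helper; the MTP-STM signal processing that yields the lattice coordinates is not modeled):

```python
import math

def hopg_displacement(m, n, a=0.246):
    """2D displacement (in nm) as the combination m*a1 + n*a2 of the two
    HOPG unit lattice vectors: length a = 0.246 nm, 60 degrees apart."""
    a1 = (a, 0.0)
    a2 = (a * math.cos(math.pi / 3), a * math.sin(math.pi / 3))
    return (m * a1[0] + n * a2[0], m * a1[1] + n * a2[1])
```

Non-integer `m`, `n` correspond to the sub-lattice-spacing interpolation mentioned in the abstract.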
Regularly configured structures with polygonal prisms for three-dimensional auxetic behaviour.
Kim, Junhyun; Shin, Dongheok; Yoo, Do-Sik; Kim, Kyoungsik
2017-06-01
We report here structures, constructed with regular polygonal prisms, that exhibit negative Poisson's ratios. In particular, we show how we can construct such a structure with regular n-gonal prism-shaped unit cells that are again built with regular n-gonal component prisms. First, we show that the only three possible values for n are 3, 4 and 6 and then discuss how we construct the unit cell again with regular n-gonal component prisms. Then, we derive Poisson's ratio formula for each of the three structures and show, by analysis and numerical verification, that the structures possess negative Poisson's ratio under certain geometric conditions.
Cosmonaut Valeriy Polyakov seen in Mir's window from Shuttle Discovery
1995-02-06
STS063-711-080 (6 Feb. 1995) --- Cosmonaut Valeriy V. Polyakov, who boarded Russia's Mir Space Station on January 8, 1994, looks out Mir's window during rendezvous operations with the Space Shuttle Discovery. This is one of 16 still photographs released by the NASA Johnson Space Center (JSC) Public Affairs Office (PAO) on February 14, 1995. Onboard the Discovery were astronauts James D. Wetherbee, mission commander; Eileen M. Collins, pilot; Bernard A. Harris, Jr., payload commander; mission specialists C. Michael Foale, Janice E. Voss, and cosmonaut Vladimir G. Titov.
From chiral quark dynamics with Polyakov loop to the hadron resonance gas model
Arriola, E. R.; Salcedo, L. L.; Megias, E.
2013-03-25
Chiral quark models with Polyakov loop at finite temperature have been often used to describe the phase transition. We show how the transition to a hadron resonance gas is realized based on the quantum and local nature of the Polyakov loop.
Kolgotin, Alexei; Müller, Detlef
2008-09-01
We present the theory of inversion with two-dimensional regularization. We use this novel method to retrieve profiles of microphysical properties of atmospheric particles from profiles of optical properties acquired with multiwavelength Raman lidar. To the best of our knowledge, this technique is the first attempt toward an operational inversion algorithm, which is strongly needed in view of multiwavelength Raman lidar networks. The new algorithm has several advantages over the inversion with so-called classical one-dimensional regularization. Extensive data postprocessing procedures, which are needed to obtain a sensible physical solution space with the classical approach, are reduced. Data analysis, which strongly depends on the experience of the operator, is put on a more objective basis. Thus, we strongly increase unsupervised data analysis. First results from simulation studies show that the new methodology in many cases outperforms our old methodology with regard to the accuracy of the retrieved particle effective radius, and number, surface-area, and volume concentration. The real and imaginary parts of the complex refractive index can be estimated with at least equal accuracy as with our old method of inversion with one-dimensional regularization. However, our results on retrieval accuracy still have to be verified in a much larger simulation study.
Bai, Funing; Franchois, Ann; De Zaeytijd, Jurgen; Pižurica, Aleksandra
2013-01-01
Breast tumor detection with microwaves is based on the difference in dielectric properties between normal and malignant tissues. The complex permittivity reconstruction of inhomogeneous dielectric biological tissues from microwave scattering is a nonlinear, ill-posed inverse problem. In our previous work we proposed to use Huber regularization and showed some preliminary results for piecewise constant objects. In this paper, we employ the Huber function as a regularizer in the even more challenging 3D piecewise continuous case of a realistic numerical breast phantom. The resulting reconstructions of complex permittivity profiles indicate potential for biomedical imaging.
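For readers unfamiliar with the penalty itself, a minimal sketch of the Huber function follows (the reconstruction algorithm built around it is not reproduced here): it is quadratic near zero, so small variations are smoothed like a standard quadratic penalty, but grows only linearly in the tails, so large jumps (tissue boundaries) are penalized far less.

```python
import numpy as np

def huber(t, delta=1.0):
    """Huber function: 0.5*t^2 for |t| <= delta, delta*(|t| - delta/2) beyond.

    Used as a regularizer, it smooths noise while preserving edges,
    unlike a purely quadratic penalty which blurs them.
    """
    t = np.asarray(t, dtype=float)
    quad = 0.5 * t ** 2
    lin = delta * (np.abs(t) - 0.5 * delta)
    return np.where(np.abs(t) <= delta, quad, lin)

print(huber(0.5))   # quadratic branch: 0.5 * 0.25 = 0.125
print(huber(3.0))   # linear branch: 1 * (3 - 0.5) = 2.5
```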
Ren, Jie; He, Tao; Li, Ye; Liu, Sai; Du, Yinhao; Jiang, Yu; Wu, Cen
2017-05-16
Over the past decades, the prevalence of type 2 diabetes mellitus (T2D) has been steadily increasing around the world. Despite large efforts devoted to better understanding the genetic basis of the disease, the identified susceptibility loci can only account for a small portion of the T2D heritability. Some of the existing approaches proposed for the high-dimensional genetic data from T2D case-control studies are limited by analyzing only a small number of SNPs at a time from a large pool of SNPs, by ignoring the correlations among SNPs, and by adopting inefficient selection techniques. We propose a network-constrained regularization method to select important SNPs by taking linkage disequilibrium into account. To accommodate the case-control study, an iteratively reweighted least squares algorithm has been developed within the coordinate descent framework, where optimization of the regularized logistic loss function is performed with respect to one parameter at a time, cycling iteratively through all the parameters until convergence. In this article, a novel approach is developed to identify important SNPs more effectively by incorporating the interconnections among them in the regularized selection. A coordinate descent based iteratively reweighted least squares (IRLS) algorithm has been proposed. Both the simulation study and the analysis of the Nurses' Health Study, a case-control study of type 2 diabetes with high-dimensional SNP measurements, demonstrate the advantage of the network-based approach over the competing alternatives.
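As a bare-bones illustration of the IRLS idea for penalized logistic regression, here is a numpy sketch in which the network-constrained penalty of the paper is replaced by a plain ridge term for brevity, and the data are a hypothetical toy example.

```python
import numpy as np

def irls_logistic(X, y, lam=0.1, iters=50):
    """Ridge-penalized logistic regression via iteratively reweighted
    least squares (the network-constrained penalty discussed above is
    simplified here to a ridge term)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # predicted probabilities
        w = mu * (1.0 - mu)                        # IRLS weights
        z = eta + (y - mu) / np.maximum(w, 1e-10)  # working response
        # Weighted ridge-regression update (penalized Newton step)
        A = X.T @ (w[:, None] * X) + lam * np.eye(p)
        beta = np.linalg.solve(A, X.T @ (w * z))
    return beta

# Toy data: the first feature carries the signal, the second is noise
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0).astype(float)
beta = irls_logistic(X, y)
print(beta[0] > abs(beta[1]))  # signal coefficient dominates
```

The coordinate-descent variant in the paper updates one coefficient at a time inside each reweighting; the full Newton step above is the simplest form of the same reweighted least-squares inner problem.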
Yan, Zai You; Hung, Kin Chew; Zheng, Hui
2003-05-01
Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of a discretized operator matrix, evaluation of the double surface integrals is reduced to calculating the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation becomes comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.
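The "discretized operator matrix" idea can be illustrated in a toy setting: once each surface integral operator is discretized on a quadrature rule, the nested double integral is just a product of two matrices, and that product can be precomputed and reused. The kernels below are hypothetical smooth stand-ins, not the Helmholtz kernels.

```python
import numpy as np

# Idealized sketch: a nested double "surface integral"
#   I(x) = ∫ K1(x, y) ∫ K2(y, z) f(z) dz dy
# discretized on N quadrature points with weights w becomes
#   I ≈ A @ (B @ f),  A_ij = K1(x_i, y_j) w_j,  B_jk = K2(y_j, z_k) w_k,
# i.e. the product of two discretized operator matrices.
N = 64
pts, w = np.polynomial.legendre.leggauss(N)       # Gauss-Legendre rule on [-1, 1]
K1 = np.cos(np.subtract.outer(pts, pts))          # hypothetical smooth kernels
K2 = np.exp(-np.subtract.outer(pts, pts) ** 2)
A = K1 * w                                        # weights absorbed column-wise
B = K2 * w
f = np.sin(pts)

direct = A @ (B @ f)       # nested matrix-vector evaluation
composed = (A @ B) @ f     # precomputed composite operator, reusable per frequency
print(np.allclose(direct, composed))
```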
Reparametrizing the Polyakov-Nambu-Jona-Lasinio model
NASA Astrophysics Data System (ADS)
Bhattacharyya, Abhijit; Ghosh, Sanjay K.; Maity, Soumitra; Raha, Sibaji; Ray, Rajarshi; Saha, Kinkar; Upadhaya, Sudipa
2017-03-01
The Polyakov-Nambu-Jona-Lasinio model has been quite successful in describing various qualitative features of observables for strongly interacting matter that are measurable in heavy-ion collision experiments. Questions remain, however, about the quantitative uncertainties in the model results. Such an estimation is possible only by contrasting these results with those obtained from first principles using the lattice QCD framework. Recently a variety of lattice QCD data were reported in the realistic continuum limit. Here we make a first attempt at reparametrizing the model so as to reproduce these lattice data. We find excellent quantitative agreement for the equation of state. Certain discrepancies in the charge and strangeness susceptibilities, as well as the baryon-charge correlation, still remain. We discuss their causes and outline possible directions for removing them.
NASA Astrophysics Data System (ADS)
Souza, Leonardo A. M.; Sampaio, Marcos; Nemes, M. C.
2006-01-01
We show that the Implicit Regularization Technique is useful to display quantum symmetry breaking in a completely regularization-independent fashion. Arbitrary parameters are expressed by finite differences between integrals of the same superficial degree of divergence, whose value is fixed on physical grounds (symmetry requirements or phenomenology). We study Weyl fermions on a classical gravitational background in two dimensions and show that, assuming Lorentz symmetry, the Weyl and Einstein Ward identities reduce to a set of algebraic equations for the arbitrary parameters, which allows us to study the Ward identities on equal footing. We conclude in a renormalization-independent way that the axial part of the Einstein Ward identity is always violated. Moreover, whereas we can preserve the pure tensor part of the Einstein Ward identity at the expense of violating the Weyl Ward identities, we may just as well violate the former and preserve the latter.
Regularization of two-dimensional supersymmetric Yang-Mills theory via non-commutative geometry
NASA Astrophysics Data System (ADS)
Valavane, K.
2000-11-01
Non-commutative geometry is a possible framework for regularizing quantum field theory in a non-perturbative way. This idea extends the lattice approximation via non-commutativity, which allows symmetries to be preserved. A supersymmetric version has also been studied, specifically for the Schwinger model on a supersphere. This paper generalizes that latter work to more general gauge groups.
NASA Astrophysics Data System (ADS)
Bonetti, Marco; Melnikov, Kirill; Tancredi, Lorenzo
2017-03-01
We compute the two-loop electroweak correction to the production of the Higgs boson in gluon fusion to higher orders in the dimensional-regularization parameter ε = (d - 4) / 2. We employ the method of differential equations augmented by the choice of a canonical basis to compute the relevant integrals and express them in terms of Goncharov polylogarithms. Our calculation provides useful results for the computation of the NLO mixed QCD-electroweak corrections to gg → H and establishes the necessary framework towards the calculation of the missing three-loop virtual corrections.
NASA Astrophysics Data System (ADS)
Sulyok, G.
2017-07-01
Starting from the general definition of a one-loop tensor N-point function, we use its Feynman parametrization to calculate the ultraviolet (UV-)divergent part of an arbitrary tensor coefficient in the framework of dimensional regularization. In contrast to existing recursion schemes, we are able to present a general analytic result in closed form that enables direct determination of the UV-divergent part of any one-loop tensor N-point coefficient, independently of the UV-divergent parts of other one-loop tensor N-point coefficients. Simplified formulas and explicit expressions are presented for A-, B-, C-, D-, E-, and F-functions.
NASA Astrophysics Data System (ADS)
Roshal, D. S.; Konevtsova, O. V.; Myasnikova, A. E.; Rochal, S. B.
2016-11-01
We consider how to control the extension of curvature-induced defects in the hexagonal order covering different curved surfaces. In this framework we propose a physical mechanism for improving the structures of two-dimensional spherical colloidal crystals (SCCs). For any SCC comprising about 300 or fewer particles, the mechanism transforms all extended topological defects (ETDs) in the hexagonal order into point disclinations. The structure is perfected by successive cycles of particle implantation and subsequent relaxation of the crystal. The mechanism is potentially suitable for obtaining colloidosomes with better selective permeability. Our approach enables modeling of the most topologically regular tubular and conical two-dimensional nanocrystals, including various possible polymorphic forms of the HIV viral capsid. Different HIV-like shells with an arbitrary number of structural units (SUs) and desired geometrical parameters are easily formed. Faceting of the obtained structures is performed by minimizing the suggested elastic energy.
Regular phases of quasi-one-dimensional spin systems: Classification and imprints on diffraction
NASA Astrophysics Data System (ADS)
Milivojević, Marko; Lazić, Nataša; Vuković, Tatjana; Damnjanović, Milan
2015-10-01
Within the rapidly developing field of low-dimensional magnetism, the paper discusses the foundations and principles of exploitation of the spin line groups, thus establishing a base for full implementation of symmetry in further studies of frustrated quasi-one-dimensional magnets. The classification of the symmetry-allowed magnetic phases reveals the full diversity of the complex helimagnets, tilted along a singled-out direction, as well as within the cross sections perpendicular to it. The neutron diffraction amplitudes are exhaustively calculated for all the possible spin arrangements, providing experimentally verifiable fingerprints of the particular order of the spin structures. A recent prediction of the gate-voltage-dependent phases of 13C carbon nanotubes serves as an illustration of the introduced concepts and their applications in ground state determination.
Fast Ultrasound Beam Prediction for Linear and Regular Two-dimensional Arrays
Hlawitschka, Mario; McGough, Robert J.; Ferrara, Katherine W.; Kruse, Dustin E.
2012-01-01
Real-time beam predictions are highly desirable for the patient-specific computations required in ultrasound therapy guidance and treatment planning. To address the long-standing issue of the computational burden associated with calculating the acoustic field in large volumes, we use graphics processing unit (GPU) computing to accelerate the computation of monochromatic pressure fields for therapeutic ultrasound arrays. In our strategy, we start with acceleration of field computations for single rectangular pistons, and then we explore fast calculations for arrays of rectangular pistons. For single-piston calculations, we employ the fast near-field method (FNM) to accurately and efficiently estimate the complex near-field wave patterns for rectangular pistons in homogeneous media. The FNM is compared with the Rayleigh-Sommerfeld method (RSM) for the number of abscissas required in the respective numerical integrations to achieve 1%, 0.1%, and 0.01% accuracy in the field calculations. Next, algorithms are described for accelerated computation of beam patterns for two different ultrasound transducer arrays: regular 1-D linear arrays and regular 2-D linear arrays. For the array types considered, the algorithm is split into two parts: 1) the computation of the field from one piston, and 2) the computation of a piston-array beam pattern based on a pre-computed field from one piston. It is shown that the process of calculating an array beam pattern is equivalent to the convolution of the single-piston field with the complex weights associated with an array of pistons. Our results show that the algorithms for computing monochromatic fields from linear and regularly spaced arrays can benefit greatly from GPU computing hardware, exceeding the performance of an expensive CPU by more than 100 times using an inexpensive GPU board. For a single rectangular piston, the FNM method facilitates volumetric computations with 0.01% accuracy at rates better than 30 ns per field point.
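The convolution equivalence stated above can be checked in one dimension with a toy example: for a regularly spaced array sampled at the element pitch, superposing shifted, weighted copies of one piston's field is exactly a discrete convolution. The single-piston field and steering weights below are hypothetical placeholders, not FNM output.

```python
import numpy as np

# Hypothetical complex single-piston field sampled on a 1-D grid
n_grid = 256
single = np.exp(1j * np.linspace(0, 8 * np.pi, n_grid)) / (1 + np.arange(n_grid))
weights = np.exp(1j * 0.3 * np.arange(8))   # hypothetical steering phases, 8 elements

# Direct superposition of shifted, weighted copies of the single-piston field
direct = np.zeros(n_grid + len(weights) - 1, dtype=complex)
for k, a in enumerate(weights):
    direct[k:k + n_grid] += a * single

# Same beam pattern obtained as a discrete convolution
conv = np.convolve(single, weights)
print(np.allclose(direct, conv))
```

This is why the pre-computed single-piston field can be reused for any set of complex element weights: only the (cheap) convolution step changes.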
One-dimensional diffusion problem with not strengthened regular boundary conditions
NASA Astrophysics Data System (ADS)
Orazov, I.; Sadybekov, M. A.
2015-11-01
In this paper we consider a family of problems simulating the determination of target components and the density of sources from given values of the initial and final states. The mathematical statement of these problems leads to an inverse problem for the diffusion equation, where it is required to find not only a solution of the problem, but also its right-hand side, which depends only on a spatial variable. One of the specific features of the considered problems is that the system of eigenfunctions of the multiple differentiation operator subject to the boundary conditions does not have the basis property. We prove the existence and uniqueness of classical solutions of the problem, independently of whether the corresponding spectral problem (for the operator of multiple differentiation with not strengthened regular boundary conditions) has a basis of generalized eigenfunctions.
Experimental investigation of thermal structures in regular three-dimensional falling films
NASA Astrophysics Data System (ADS)
Rietz, M.; Rohlfs, W.; Kneer, R.; Scheid, B.
2015-03-01
Interfacial waves on the surface of a falling liquid film are known to modify heat and mass transfer. Under non-isothermal conditions, the wave topology is strongly influenced by the presence of thermocapillary (Marangoni) forces at the interface which leads to a destabilization of the film flow and potentially to critical film thinning. In this context, the present study investigates the evolution of the surface topology and the evolution of the surface temperature for the case of regularly excited solitary-type waves on a falling liquid film under the influence of a wall-side heat flux. Combining film thickness (chromatic confocal imaging) and surface temperature information (infrared thermography), interactions between hydrodynamics and thermocapillary forces are revealed. These include the formation of rivulets, film thinning and wave number doubling in spanwise direction. Distinct thermal structures on the films' surface can be associated to characteristics of the surface topology.
Topological regularization and self-duality in four-dimensional anti-de Sitter gravity
Miskovic, Olivera; Olea, Rodrigo
2009-06-15
It is shown that the addition of a topological invariant (Gauss-Bonnet term) to the anti-de Sitter gravity action in four dimensions recovers the standard regularization given by the holographic renormalization procedure. This crucial step makes possible the inclusion of an odd parity invariant (Pontryagin term) whose coupling is fixed by demanding an asymptotic (anti) self-dual condition on the Weyl tensor. This argument allows one to find the dual point of the theory where the holographic stress tensor is related to the boundary Cotton tensor as T^i_j = ±(l^2/8πG) C^i_j, which has been observed in recent literature in solitonic solutions and hydrodynamic models. A general procedure to generate the counterterm series for anti-de Sitter gravity in any even dimension from the corresponding Euler term is also briefly discussed.
Casanova, Ramon; Whitlow, Christopher T.; Wagner, Benjamin; Williamson, Jeff; Shumaker, Sally A.; Maldjian, Joseph A.; Espeland, Mark A.
2011-01-01
In this work we use a large-scale regularization approach based on penalized logistic regression to automatically classify structural MRI images (sMRI) according to cognitive status. Its performance is illustrated using sMRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) clinical database. We downloaded sMRI data from 98 subjects (49 cognitively normal and 49 patients) matched by age and sex from the ADNI website. Images were segmented and normalized using the SPM8 and ANTS software packages. Classification was performed using the GLMNET library implementation of penalized logistic regression based on coordinate-wise descent optimization techniques. To avoid optimistic estimates, classification accuracy, sensitivity, and specificity were determined using a combination of a three-way split of the data with nested 10-fold cross-validations. One of the main features of this approach is that classification is performed based on large-scale regularization. The methodology presented here was highly accurate, sensitive, and specific when automatically classifying sMRI images of cognitively normal subjects and Alzheimer disease (AD) patients. Higher levels of accuracy, sensitivity, and specificity were achieved for gray matter (GM) volume maps (85.7, 82.9, and 90%, respectively) compared to white matter volume maps (81.1, 80.6, and 82.5%, respectively). We found that GM and white matter tissues carry useful information for discriminating patients from cognitively normal subjects using sMRI brain data. Although we have demonstrated the efficacy of this voxel-wise classification method in discriminating cognitively normal subjects from AD patients, in principle it could be applied to any clinical population. PMID:22016732
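The evaluation protocol above hinges on the fold construction: every subject appears in exactly one test fold, so no test subject influences the model that scores it. A minimal numpy sketch of k-fold index construction (the nested three-way split and the classifier itself are omitted):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Return a list of (train_idx, test_idx) pairs for k-fold CV
    over n samples, after a random permutation."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    folds = np.array_split(perm, k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(k)]

# 98 subjects, 10 folds, as in the study design described above
splits = kfold_indices(98, 10)
print(len(splits))                       # number of folds
print(sum(len(te) for _, te in splits))  # every subject tested exactly once
```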
A note on the dimensional regularization of the Standard Model coupled with quantum gravity
NASA Astrophysics Data System (ADS)
Anselmi, Damiano
2004-08-01
In flat space, γ5 and the epsilon tensor break the dimensionally continued Lorentz symmetry, but propagators have fully Lorentz invariant denominators. When the Standard Model is coupled with quantum gravity γ5 breaks the continued local Lorentz symmetry. I show how to deform the Einstein Lagrangian and gauge-fix the residual local Lorentz symmetry so that the propagators of the graviton, the ghosts and the BRST auxiliary fields have fully Lorentz invariant denominators. This makes the calculation of Feynman diagrams more efficient.
Akçakaya, Mehmet; Basha, Tamer A; Goddu, Beth; Goepfert, Lois A; Kissinger, Kraig V; Tarokh, Vahid; Manning, Warren J; Nezafat, Reza
2011-09-01
An improved image reconstruction method from undersampled k-space data, low-dimensional-structure self-learning and thresholding (LOST), which utilizes the structure from the underlying image is presented. A low-resolution image from the fully sampled k-space center is reconstructed to learn image patches of similar anatomical characteristics. These patches are arranged into "similarity clusters," which are subsequently processed for dealiasing and artifact removal, using underlying low-dimensional properties. The efficacy of the proposed method in scan time reduction was assessed in a pilot coronary MRI study. Initially, in a retrospective study on 10 healthy adult subjects, we evaluated retrospective undersampling and reconstruction using LOST, wavelet-based l(1)-norm minimization, and total variation compressed sensing. Quantitative measures of vessel sharpness and mean square error, and qualitative image scores were used to compare reconstruction for rates of 2, 3, and 4. Subsequently, in a prospective study, coronary MRI data were acquired using these rates, and LOST-reconstructed images were compared with an accelerated data acquisition using uniform undersampling and sensitivity encoding reconstruction. Subjective image quality and sharpness data indicate that LOST outperforms the alternative techniques for all rates. The prospective LOST yields images with superior quality compared with sensitivity encoding or l(1)-minimization compressed sensing. The proposed LOST technique greatly improves image reconstruction for accelerated coronary MRI acquisitions. Copyright © 2011 Wiley-Liss, Inc.
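The core LOST idea (group patches into similarity clusters, then exploit the low-dimensional structure of each cluster jointly) can be caricatured in a few lines. This is only a hedged sketch with synthetic data: the 3D transform, the iterative dealiasing, and the k-space sampling model of the actual method are all omitted, and the 1D FFT plus hard threshold stands in for the real cluster processing.

```python
import numpy as np

rng = np.random.default_rng(2)
patches = rng.standard_normal((100, 16))   # 100 flattened 4x4 image patches
refs = patches[:4]                         # "learned" reference patches
                                           # (from a low-resolution image in LOST)

# Assign each patch to its nearest reference: the "similarity clusters"
d = ((patches[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
labels = d.argmin(axis=1)

# Joint processing per cluster: transform across the stacked patches,
# then hard-threshold weak coefficients (stand-in for dealiasing)
denoised = np.empty_like(patches)
for c in range(4):
    stack = patches[labels == c]
    coef = np.fft.fft(stack, axis=0)
    coef[np.abs(coef) < 1.0] = 0.0         # discard weak coefficients
    denoised[labels == c] = np.fft.ifft(coef, axis=0).real

print(denoised.shape)
```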
Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser
2015-01-01
Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
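The Lanczos construction mentioned above can be demonstrated concretely: starting the walk from a symmetric initial state, the Krylov subspace generated by the Hamiltonian closes after only a few vectors, exposing the invariant subspace without any knowledge of the graph's symmetry group. A small numpy sketch on the star graph (one of the examples named in the abstract):

```python
import numpy as np

def lanczos(H, v0, m):
    """m-step Lanczos iteration: returns the tridiagonal coefficients
    (alpha, beta) and an orthonormal basis V of the Krylov subspace of
    H generated from v0.  The iteration stops early if the subspace
    closes (beta underflows), signalling an invariant subspace."""
    n = len(v0)
    V = np.zeros((m, n))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[0] = v0 / np.linalg.norm(v0)
    w = H @ V[0]
    alpha[0] = V[0] @ w
    w = w - alpha[0] * V[0]
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        if beta[j - 1] < 1e-12:
            break                      # subspace is invariant: stop
        V[j] = w / beta[j - 1]
        w = H @ V[j] - beta[j - 1] * V[j - 1]
        alpha[j] = V[j] @ w
        w = w - alpha[j] * V[j]
    return alpha, beta, V

# Star graph on 6 vertices: centre 0 linked to leaves 1..5
n = 6
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0

# Starting from the centre, the dynamics never leaves a 2D subspace
v0 = np.zeros(n); v0[0] = 1.0
alpha, beta, V = lanczos(A, v0, 4)
print(np.count_nonzero(np.linalg.norm(V, axis=1) > 1e-12))
```

The 6-dimensional Hilbert space reduces to a 2-dimensional invariant subspace, which is the kind of reduction the paper exploits for search and transport calculations.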
Three-dimensional regular arrangement of the annular ligament of the rat stapediovestibular joint.
Ohashi, Mitsuru; Ide, Soyuki; Kimitsuki, Takashi; Komune, Shizuo; Suganuma, Tatsuo
2006-03-01
The stapes footplate articulates with the vestibular window through the annular ligament. This articulation is known as the stapediovestibular joint (SVJ). We investigated the ultrastructure of adult rat SVJ and report here on the characteristic ultrastructure of the corresponding annular ligament. Transmission electron microscopy showed that this annular ligament comprises thick ligament fibers consisting of a peripheral mantle of microfibrils and an electron-lucent central amorphous substance that is regularly arranged in a linear fashion, forming laminated structures parallel to the horizontal plane of the SVJ. Scanning electron microscopy revealed that transverse microfibrils cross the thick ligament fibers, showing a lattice-like structure. The annular ligament was vividly stained with elastica van Gieson's stain and the Verhoeff's iron hematoxylin method. Staining of the electron-lucent central amorphous substance of the thick ligament fibers by the tannate-metal salt method revealed an intense electron density. These results indicate that the annular ligament of the SVJ is mainly composed of mature elastic fibers.
Hedgehog black holes and the Polyakov loop at strong coupling
NASA Astrophysics Data System (ADS)
Headrick, Matthew
2008-05-01
In N=4 super-Yang-Mills theory at large N, large λ, and finite temperature, the value of the Wilson-Maldacena loop wrapping the Euclidean time circle (the Polyakov-Maldacena loop, or PML) is computed by the area of a certain minimal surface in the dual supergravity background. This prescription can be used to calculate the free energy as a function of the PML (averaged over the spatial coordinates), by introducing into the bulk action a Lagrange multiplier term that fixes the (average) area of the appropriate minimal surface. This term, which can also be viewed as a chemical potential for the PML, contributes to the bulk stress tensor like a string stretching from the horizon to the boundary (smeared over the angular directions). We find the corresponding “hedgehog” black hole solutions numerically, within an SO(6)-preserving ansatz, and derive part of the free energy diagram for the PML. As a warm-up problem, we also find exact solutions for hedgehog black holes in pure gravity, and derive the free energy and phase diagrams for that system.
Hou, Jiayi
2015-01-01
An ordinal scale is commonly used to measure health status and disease-related outcomes in hospital settings as well as in translational medical research. In addition, repeated measurements are common in clinical practice for tracking and monitoring the progression of complex diseases. Classical methodology based on statistical inference, in particular ordinal modeling, has contributed to the analysis of data in which the response categories are ordered and the number of covariates (p) remains smaller than the sample size (n). With the emergence of genomic technologies being increasingly applied for more accurate diagnosis and prognosis, high-dimensional data, where the number of covariates (p) is much larger than the number of samples (n), are generated. To meet these emerging needs, we introduce a two-stage algorithm: first, we extend the Generalized Monotone Incremental Forward Stagewise (GMIFS) method to the cumulative logit ordinal model; second, we combine the GMIFS procedure with the classical mixed-effects model to classify disease status along disease progression over time. We demonstrate the efficiency and accuracy of the proposed models in classification using a time-course microarray dataset collected from the Inflammation and the Host Response to Injury study. PMID:25720102
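For orientation, the monotone incremental forward stagewise idea that GMIFS generalizes can be sketched in its simplest (linear-model) form: at each step, the single coefficient most correlated with the current residual is nudged by a tiny increment, producing sparse, implicitly regularized fits. The ordinal (cumulative logit) extension of the paper is not reproduced here; the data are a hypothetical toy example.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, steps=2000):
    """Incremental forward stagewise regression (linear-model sketch;
    GMIFS extends this idea to the cumulative logit ordinal model).
    Each step moves the best-correlated coefficient by +/- eps."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(steps):
        r = y - X @ beta                 # current residual
        corr = X.T @ r
        j = np.argmax(np.abs(corr))      # most correlated predictor
        beta[j] += eps * np.sign(corr[j])
    return beta

# Toy data: only predictor 0 carries signal
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(100)
beta = forward_stagewise(X, y)
print(np.argmax(np.abs(beta)))  # the signal-carrying predictor
```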
NASA Astrophysics Data System (ADS)
Botelho, Luiz C. L.
2017-02-01
We present new path-integral studies of the Polyakov noncritical and Nambu-Goto critical string theories and their applications to the QCD(SU(∞)) interquark potential. We also evaluate the long-distance asymptotic behavior of the interquark potential in Nambu-Goto string theory with an extrinsic term in Polyakov's string at D → ∞. Finally, we propose an alternative and new view of the covariant Polyakov string path integral with fourth-order two-dimensional quantum gravity as an effective stringy description of QCD(SU(∞)) in the deep infrared region.
NASA Astrophysics Data System (ADS)
Ahmadzadeh, Ezat; Jaferzadeh, Keyvan; Lee, Jieun; Moon, Inkyu
2017-07-01
We present unsupervised clustering methods for automatic grouping of human red blood cells (RBCs), extracted from RBC quantitative phase images obtained by digital holographic microscopy, into three RBC clusters with regular shapes: biconcave, stomatocyte, and sphero-echinocyte. We select features related to the RBC profile and morphology, such as RBC average thickness, sphericity coefficient, and mean corpuscular volume, and clustering methods, including density-based spatial clustering of applications with noise (DBSCAN), k-medoids, and k-means, are applied to the set of morphological features. The clustering results of RBCs using a set of three-dimensional features are compared against a set of two-dimensional features. Our experimental results indicate that by utilizing the introduced set of features, two groups of biconcave RBCs and old RBCs (suffering from the sphero-echinocyte process) can be perfectly clustered. In addition, by increasing the number of clusters, the three RBC types can be effectively clustered in an automated unsupervised manner with high accuracy. The performance evaluation of the clustering techniques reveals that they can assist hematologists in further diagnosis.
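Of the three clustering methods named above, k-means is the simplest to sketch. The following numpy toy example clusters hypothetical 3-feature vectors (standing in for thickness, sphericity, and volume) from three well-separated synthetic populations; a deterministic farthest-point initialization is used so the sketch is reproducible.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means (Lloyd's algorithm) with deterministic
    farthest-point initialization; the paper also evaluates
    k-medoids and DBSCAN on the same morphological features."""
    centers = [X[0]]
    for _ in range(k - 1):  # spread the initial centers far apart
        d = ((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)                       # assignment step
        centers = np.array([X[labels == c].mean(axis=0) # update step
                            for c in range(k)])
    return labels, centers

# Three well-separated synthetic populations of 3-feature vectors
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 0.05, size=(30, 3)) for m in (0.0, 1.0, 2.0)])
labels, centers = kmeans(X, 3)
print(len(set(labels.tolist())))  # the three populations are recovered
```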
The Polyakov loop correlator at NNLO and singlet and octet correlators
Ghiglieri, Jacopo
2011-05-23
We present the complete next-to-next-to-leading-order calculation of the correlation function of two Polyakov loops for temperatures smaller than the inverse distance between the loops and larger than the Coulomb potential. We discuss the relationship of this correlator with the singlet and octet potentials which we obtain in an Effective Field Theory framework based on finite-temperature potential Non-Relativistic QCD, showing that the Polyakov loop correlator can be re-expressed, at the leading order in a multipole expansion, as a sum of singlet and octet contributions. We also revisit the calculation of the expectation value of the Polyakov loop at next-to-next-to-leading order.
Constituent Quarks and Gluons, Polyakov loop and the Hadron Resonance Gas Model
NASA Astrophysics Data System (ADS)
Megías, E.; Ruiz Arriola, E.; Salcedo, L. L.
2014-03-01
Based on first-principle QCD arguments, it has been argued in [1] that the vacuum expectation value of the Polyakov loop can be represented in the hadron resonance gas model. We study this within the Polyakov-constituent quark model by implementing the quantum and local nature of the Polyakov loop [2, 3]. The existence of exotic states in the spectrum is discussed. Presented by E. Megías at the International Nuclear Physics Conference INPC 2013, 2-7 June 2013, Firenze, Italy. Supported by Plan Nacional de Altas Energías (FPA2011-25948), DGI (FIS2011-24149), Junta de Andalucía grant FQM-225, Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), Spanish MINECO's Centro de Excelencia Severo Ochoa Program grant SEV-2012-0234, and the Juan de la Cierva Program.
Propagator, sewing rules, and vacuum amplitude for the Polyakov point particles with ghosts
Giannakis, I.; Ordonez, C.R.; Rubin, M.A.; Zucchini, R.
1989-01-01
The authors apply techniques developed for strings to the case of the spinless point particle. The Polyakov path integral with ghosts is used to obtain the propagator and one-loop vacuum amplitude. The propagator is shown to correspond to the Green's function for the BRST field theory in Siegel gauge. The reparametrization invariance of the Polyakov path integral is shown to lead automatically to the correct trace log result for the one-loop diagram, despite the fact that naive sewing of the ends of a propagator would give an incorrect answer. This type of failure of naive sewing is identical to that found in the string case. The present treatment provides, in the simplified context of the point particle, a pedagogical introduction to Polyakov path integral methods with and without ghosts.
Polyakov-loop suppression of colored states in a quark-meson-diquark plasma
NASA Astrophysics Data System (ADS)
Blaschke, D.; Dubinin, A.; Buballa, M.
2015-06-01
A quark-meson-diquark plasma is considered within the Polyakov-loop extended Nambu-Jona-Lasinio model for dynamical chiral symmetry breaking and restoration in quark matter. Based on a generalized Beth-Uhlenbeck approach to mesons and diquarks we present the thermodynamics of this system including the Mott dissociation of mesons and diquarks at finite temperature. A striking result is the suppression of the diquark abundance below the chiral restoration temperature by the coupling to the Polyakov loop, because of their color degree of freedom. This is understood in close analogy to the suppression of quark distributions by the same mechanism. Mesons as color singlets are unaffected by the Polyakov-loop suppression. At temperatures above the chiral restoration mesons and diquarks are both suppressed due to the Mott effect, whereby the positive resonance contribution to the pressure is largely compensated by the negative scattering contribution in accordance with the Levinson theorem.
Extensions and further applications of the nonlocal Polyakov-Nambu-Jona-Lasinio model
Hell, T.; Weise, W.; Kashiwa, K.
2011-06-01
The nonlocal Polyakov-loop-extended Nambu-Jona-Lasinio model is further improved by including momentum-dependent wave-function renormalization in the quark quasiparticle propagator. Both two- and three-flavor versions of this improved Polyakov-loop-extended Nambu-Jona-Lasinio model are discussed, the latter with inclusion of the (nonlocal) 't Hooft-Kobayashi-Maskawa determinant interaction in order to account for the axial U(1) anomaly. Thermodynamics and phases are investigated and compared with recent lattice-QCD results.
NASA Astrophysics Data System (ADS)
Aleshin, S. S.; Goriachuk, I. O.; Kataev, A. L.; Stepanyantz, K. V.
2017-01-01
At the three-loop level we analyze how the NSVZ relation appears for N = 1 SQED regularized by dimensional reduction. This is done by a method analogous to the one earlier used for theories regularized by higher derivatives. Within the dimensional technique, the loop integrals cannot be written as integrals of double total derivatives. However, similar structures can be written in the considered approximation and are taken as a starting point. We then demonstrate that, unlike the higher derivative regularization, the NSVZ relation is not valid for the renormalization group functions defined in terms of the bare coupling constant. However, for the renormalization group functions defined in terms of the renormalized coupling constant, it is possible to impose boundary conditions on the renormalization constants giving the NSVZ scheme in the three-loop order. They are similar to the all-loop conditions defining the NSVZ scheme obtained with the higher derivative regularization, but are more complicated. The NSVZ schemes constructed with dimensional reduction and with the higher derivative regularization are related by a finite renormalization in the considered approximation.
A recursive method to calculate UV-divergent parts at one-loop level in dimensional regularization
NASA Astrophysics Data System (ADS)
Feng, Feng
2012-07-01
A method is introduced to calculate the UV-divergent parts at one-loop level in dimensional regularization. The method is based on recursion, and the basic integrals after the recursive reduction are just scaleless integrals, which involve no momentum scale other than the loop momentum itself. The method can be easily implemented in any symbolic computer language, and an implementation in Mathematica is ready to use. Catalogue identifier: AELY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 361 No. of bytes in distributed program, including test data, etc.: 412 084 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which Mathematica runs. Operating system: Any capable of running Mathematica. Classification: 11.1 External routines: FeynCalc (http://www.feyncalc.org/), FeynArts (http://www.feynarts.de/) Nature of problem: To get the UV-divergent part of any one-loop expression. Solution method: UVPart is a Mathematica package in which the recursive method has been implemented. Running time: In general below one second.
NASA Astrophysics Data System (ADS)
Altaç, Zekeriya; Sert, Zerrin
2017-01-01
Alternative synthetic kernel (ASKN) approximation, just as the standard SKN method, is derived from the radiative integral transfer equations in full 3D generality. The direct and diffuse terms of thermal radiation appear explicitly in the radiative integral transfer equations as surface and volume integrals, respectively. In the standard SKN method, the approximation is applied to the diffuse terms while the direct terms are evaluated analytically. The alternative formulation differs from the standard one in that the direct radiation wall contributions are also approximated in the same spirit as the synthetic kernel approximation. This alternative formulation also yields a set of coupled partial differential equations (the ASKN equations) which can be solved using finite volume methods. The approximation is applied to radiative transfer calculations in regular and irregular two-dimensional absorbing, emitting and isotropically scattering media. Four benchmark problems (one rectangular and three irregular media) are considered, and the net radiative flux and/or incident energy solutions along the boundaries are compared with available exact, standard discrete ordinates S4 and S12, modified discrete ordinates S4, Monte Carlo and collocation spectral solutions to assess the accuracy of the method. The ASKN approximation yields ray-effect-free incident energy and radiative flux distributions, and low-order ASKN solutions are generally better than those of the high-order standard discrete ordinates method.
NASA Astrophysics Data System (ADS)
Gibbon, John D.; Pal, Nairita; Gupta, Anupam; Pandit, Rahul
2016-12-01
We consider the three-dimensional (3D) Cahn-Hilliard equations coupled to, and driven by, the forced, incompressible 3D Navier-Stokes equations. The combination, known as the Cahn-Hilliard-Navier-Stokes (CHNS) equations, is used in statistical mechanics to model the motion of a binary fluid. The potential development of singularities (blow-up) in the contours of the order parameter ϕ is an open problem. To address this we have proved a theorem that closely mimics the Beale-Kato-Majda theorem for the 3D incompressible Euler equations [J. T. Beale, T. Kato, and A. J. Majda, Commun. Math. Phys. 94, 61 (1984), 10.1007/BF01212349]. By taking an L∞ norm of the energy of the full binary system, designated as E∞, we have shown that ∫_0^t E∞(τ) dτ governs the regularity of solutions of the full 3D system. Our direct numerical simulations (DNSs) of the 3D CHNS equations for (a) a gravity-driven Rayleigh-Taylor instability and (b) constant-energy-injection forcing, with 128³ to 512³ collocation points, confirm that E∞ remains bounded for as long as our computations allow.
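In a DNS, the regularity criterion above reduces to monitoring a one-dimensional time integral of sampled E∞ values; a minimal trapezoidal-rule monitor (with a made-up constant energy series, purely for illustration) looks like:

```python
def bkm_monitor(times, e_inf):
    """Running value of the integral of E_inf(tau) over [0, t] by the
    trapezoidal rule.  In the theorem above, boundedness of this integral
    controls regularity of the 3D CHNS solution; here e_inf is simply a
    sampled time series from a simulation."""
    integral, out = 0.0, [0.0]
    for (t0, e0), (t1, e1) in zip(zip(times, e_inf),
                                  zip(times[1:], e_inf[1:])):
        integral += 0.5 * (e0 + e1) * (t1 - t0)
        out.append(integral)
    return out

# constant E_inf = 2 over [0, 1] integrates to 2
times = [i / 10 for i in range(11)]
vals = bkm_monitor(times, [2.0] * 11)
```

A blow-up candidate would show this running integral growing without bound as t approaches the putative singular time.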
NASA Astrophysics Data System (ADS)
Andreev, V. B.
2015-01-01
The first boundary value problem for a one-dimensional singularly perturbed convection-diffusion equation with variable coefficients on a finite interval is considered. For the regular component of the solution, unimprovable a priori estimates in the Hölder norms are obtained. The estimates are unimprovable in the sense that they fail on any weakening of the estimating norm.
NASA Astrophysics Data System (ADS)
Lorquet, J. C.
2017-04-01
energies, these characteristics persist, but to a lesser degree. Recrossings of the dividing surface then become much more frequent and the phase space volumes of initial conditions that generate recrossing-free trajectories decrease. Altogether, one ends up with an additional illustration of the concept of reactive cylinder (or conduit) in phase space that reactive trajectories must follow. Reactivity is associated with dynamical regularity and dimensionality reduction, whatever the shape of the potential energy surface, no matter how strong its anharmonicity, and whatever the curvature of its reaction path. Both simplifying features persist during the entire reactive process, up to complete separation of fragments. The ergodicity assumption commonly assumed in statistical theories is inappropriate for reactive trajectories.
Color superconductivity in the Nambu-Jona-Lasinio model complemented by a Polyakov loop
NASA Astrophysics Data System (ADS)
Blanquier, Eric
2017-06-01
Color superconductivity is studied with the Nambu and Jona-Lasinio (NJL) model, coupled to a Polyakov loop to form the PNJL model. A μ-dependent Polyakov-loop potential is also considered (μPNJL model). One objective is to detail the analytical calculations that lead to the equations to be solved in all of the treated cases: the normal quark (NQ), two-flavor color-superconducting (2SC), and color-flavor-locked (CFL) phases, in an SU(3)f × SU(3)c description. The calculations are performed as functions of the temperature T and the chemical potentials μf or the densities ρf, with or without isospin symmetry. The relation between the μf and ρf results is studied, and the influence of the color superconductivity and the Polyakov loop on the results is analyzed. A triple coincidence is observed at low T between the chiral restoration, the deconfinement transition described by the Polyakov loop, and the NQ/2SC phase transition. Furthermore, an sSC phase is identified in the (ρq, ρs) plane. Possible links between some of the obtained results and physical systems are pointed out.
NASA Astrophysics Data System (ADS)
Montalbán, A.; Velasco, V. R.; Tutor, J.; Fernández-Velicia, F. J.
2007-06-01
We have studied the vibrational frequencies and atom displacements of one-dimensional systems formed by combinations of Thue-Morse and Rudin-Shapiro quasi-regular stackings with periodic ones. The materials are described by nearest-neighbor force constants and the corresponding atom masses. These systems exhibit differences in the frequency spectrum as compared to the original simple quasi-regular generations and periodic structures. The most important feature is the presence of separate confinement of the atom displacements in one of the parts forming the total composite structure for different frequency ranges, thus acting as a kind of phononic cavity.
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T; Cooper, Benjamin J; Kuncic, Zdenka; Keall, Paul J
2015-01-01
Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating general knowledge of the thoracic anatomy, via anatomy segmentation, into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan, and was compared to FDK, ASD-POCS, and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS.
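A one-dimensional caricature of the idea, not the authors' 3D CBCT algorithm: total-variation smoothing with a spatially varying weight, where the weight is reduced inside a hypothetical "segmentation mask" so the edge there is preserved while noise elsewhere is suppressed.

```python
import math
import random

def adaptive_tv_denoise(y, weights, step=0.02, iters=800, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + sum_i w_i*|x[i+1] - x[i]|,
    with |d| smoothed as sqrt(d*d + eps).  The spatially varying w_i plays
    the role of AAIR's segmentation-driven regularization: small w_i where
    anatomy (an edge) must survive, larger w_i elsewhere."""
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]           # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            s = weights[i] * d / math.sqrt(d * d + eps)  # smoothed-TV gradient
            g[i] -= s
            g[i + 1] += s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

random.seed(2)
clean = [0.0] * 30 + [1.0] * 30                   # one "anatomical" edge
noisy = [c + random.gauss(0, 0.3) for c in clean]
w = [0.3] * 59
for i in range(27, 32):                            # mask: relax TV near the edge
    w[i] = 0.02
rec = adaptive_tv_denoise(noisy, w)
```

The relaxed weights around the edge mimic suppressing over-smoothing at a structure of interest; the uniform-weight regions are smoothed as in plain TV minimization.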
Renormalized Polyakov loop in the deconfined phase of SU(N) gauge theory and gauge-string duality.
Andreev, Oleg
2009-05-29
We use gauge-string duality to analytically evaluate the renormalized Polyakov loop in pure Yang-Mills theories. For SU(3), the result is in quite good agreement with lattice simulations for a broad temperature range.
NASA Astrophysics Data System (ADS)
Cappa, G.; Ferrari, S.
2016-12-01
Let X be a separable Banach space endowed with a non-degenerate centered Gaussian measure μ. The associated Cameron-Martin space is denoted by H. Let ν = e^{-U} μ, where U : X → R is a sufficiently regular convex and continuous function. In this paper we are interested in the W^{2,2} regularity of the weak solutions of elliptic equations of the type
NASA Astrophysics Data System (ADS)
Critelli, Renato; Rougemont, Romulo; Finazzo, Stefano I.; Noronha, Jorge
2016-12-01
We investigate the temperature and magnetic field dependence of the Polyakov loop and heavy quark entropy in a bottom-up Einstein-Maxwell-dilaton (EMD) holographic model for the strongly coupled quark-gluon plasma that quantitatively matches lattice data for the (2+1)-flavor QCD equation of state at finite magnetic field and physical quark masses. We compare the holographic EMD model results for the Polyakov loop at zero and nonzero magnetic fields and the heavy quark entropy at vanishing magnetic field with the latest lattice data available for these observables and find good agreement for temperatures T ≳ 150 MeV and magnetic fields eB ≲ 1 GeV². Predictions for the behavior of the heavy quark entropy at nonzero magnetic fields are made that could be readily tested on the lattice.
Thermodynamics of a three-flavor nonlocal Polyakov-Nambu-Jona-Lasinio model
Hell, T.; Roessner, S.; Cristoforetti, M.; Weise, W.
2010-04-01
The present work generalizes a nonlocal version of the Polyakov-loop-extended Nambu and Jona-Lasinio (PNJL) model to the case of three active quark flavors, with inclusion of the axial U(1) anomaly. Gluon dynamics is incorporated through a gluonic background field, expressed in terms of the Polyakov loop. The thermodynamics of the nonlocal PNJL model accounts for both chiral and deconfinement transitions. Our results obtained in mean-field approximation are compared to lattice QCD results for N_f = 2+1 quark flavors. Additional pionic and kaonic contributions to the pressure are calculated in random phase approximation. Finally, this nonlocal three-flavor PNJL model is applied to the finite density region of the QCD phase diagram. It is confirmed that the existence and location of a critical point in this phase diagram depend sensitively on the strength of the axial U(1) breaking interaction.
Hydrodynamics of the Polyakov line in SU(Nc) Yang-Mills
Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail
2015-12-08
We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite Nc for Yang-Mills theory in even and odd space-time dimensions. The hydro-static solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the de-confined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of Nc, and are consistent with the string model results at Nc = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(Nc) bubble using a piece-wise sound wave is suggested.
Physical properties of Polyakov loop geometrical clusters in SU(2) gluodynamics
NASA Astrophysics Data System (ADS)
Ivanytskyi, A. I.; Bugaev, K. A.; Nikonov, E. G.; Ilgenfritz, E.-M.; Oliinychenko, D. R.; Sagun, V. V.; Mishustin, I. N.; Petrov, V. K.; Zinovjev, G. M.
2017-04-01
We apply the liquid droplet model to describe the clustering phenomenon in SU(2) gluodynamics, especially in the vicinity of the deconfinement phase transition. In particular, we analyze the size distributions of clusters formed by Polyakov loops of the same sign. Within such an approach this phase transition can be considered as a transition between two types of liquids, where one of the liquids (the largest droplet of a certain Polyakov loop sign) experiences condensation, while the other one (the next-to-largest droplet of opposite Polyakov loop sign) evaporates. The clusters of smaller sizes form two accompanying gases, and their size distributions are described by the liquid droplet parameterization. By fitting the lattice data we have extracted the value of the Fisher exponent τ = 1.806 ± 0.008. We also found that the temperature dependences of the surface tension of both gaseous clusters are entirely different below and above the phase transition and, hence, can serve as an order parameter. The critical exponents of the surface tension coefficient in the vicinity of the phase transition are found. Our analysis shows that the temperature dependence of the surface tension coefficient above the critical temperature has a T² behavior in one gas of clusters and a T⁴ behavior in the other.
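The quoted τ comes from fitting a power-law size distribution n(k) ∝ k^(-τ); a minimal log-log least-squares fit of that form (on exact synthetic data, not the lattice clusters) can be sketched as:

```python
import math

def fit_fisher_exponent(sizes, counts):
    """Least-squares slope of log(counts) vs log(sizes); for a
    liquid-droplet size distribution n(k) ~ A * k**(-tau) the fitted
    slope is -tau."""
    xs = [math.log(k) for k in sizes]
    ys = [math.log(n) for n in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# exact synthetic power law with tau = 1.8 (illustration, not lattice data)
sizes = list(range(1, 51))
counts = [100.0 * k ** -1.8 for k in sizes]
tau = fit_fisher_exponent(sizes, counts)
```

On real cluster data the fit would be restricted to the scaling window of intermediate sizes, with the largest (condensed) droplets excluded.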
Finding the effective Polyakov line action for SU(3) gauge theories at finite chemical potential
NASA Astrophysics Data System (ADS)
Greensite, Jeff; Langfeld, Kurt
2014-07-01
Motivated by the sign problem, we calculate the effective Polyakov line action corresponding to certain SU(3) lattice gauge theories on a 16³×6 lattice via the "relative weights" method introduced in our previous papers. The calculation is carried out at β = 5.6, 5.7 for the pure gauge theory and at β = 5.6 for the gauge field coupled to a relatively light scalar particle. In the latter example we determine the effective theory also at finite chemical potential and show how observables relevant to phase structure can be computed in the effective theory via mean field methods. In all cases a comparison of Polyakov line correlators in the effective theory and the underlying lattice gauge theory, computed numerically at zero chemical potential, shows accurate agreement down to correlator magnitudes of order 10⁻⁵. We also derive the effective Polyakov line action corresponding to a gauge theory with heavy quarks and large chemical potential and apply mean field methods to extract observables.
SU(3) Polyakov linear-σ model in an external magnetic field
NASA Astrophysics Data System (ADS)
Tawfik, Abdel Nasser; Magdy, Niseem
2014-07-01
In the present work, we analyze the effects of an external magnetic field on the chiral critical temperature Tc of strongly interacting matter. In doing this, we can characterize the magnetic properties of quantum chromodynamics (QCD) strongly interacting matter, the quark-gluon plasma (QGP). We investigate this in the framework of the SU(3) Polyakov linear sigma model (PLSM). To this end, we implement two approaches representing two systems, in which the Polyakov-loop potential added to the PLSM is either renormalized or non-renormalized. The effects of Landau quantization on the strongly interacting matter are conjectured to reduce the electromagnetic interactions between quarks. In this case, the color interactions will be dominant and increasing, which in turn can be achieved by increasing the Polyakov-loop fields. Each approach equips us with a different understanding of the critical temperature under the effect of an external magnetic field. In both systems, we obtain a paramagnetic response. In one system, we find that Tc increases with increasing magnetic field; in the other, Tc significantly decreases with increasing magnetic field.
NASA Astrophysics Data System (ADS)
Aleshin, S. S.; Kataev, A. L.; Stepanyantz, K. V.
2016-01-01
When the higher derivative regularization is used for N = 1 supersymmetric quantum electrodynamics (SQED) with N_f flavors, the loop integrals giving the β-function are integrals of double total derivatives in the momentum space. This feature allows reducing one of the loop integrals to an integral of the δ-function and deriving the Novikov-Shifman-Vainshtein-Zakharov relation for the renormalization group functions defined in terms of the bare coupling constant. We consider N = 1 SQED with N_f flavors regularized by dimensional reduction in the DR-bar scheme. Evaluating the scheme-dependent three-loop contribution to the β-function proportional to (N_f)², we find structures analogous to integrals of the δ-singularities. After adding the scheme-independent terms proportional to (N_f)¹, we obtain the known result for the three-loop β-function.
NASA Astrophysics Data System (ADS)
Abuki, H.; Ciminale, M.; Gatto, R.; Nardulli, G.; Ruggieri, M.
2008-04-01
We study how charge neutrality affects the phase structure of the three-flavor Polyakov-loop Nambu-Jona-Lasinio (PNJL) model. We point out that, within the conventional PNJL model at finite density, color neutrality is missing because the Wilson line serves as an external colored field coupled to dynamical quarks. In this paper we heuristically assume that the model may still be applicable. To achieve color neutrality, one then has to allow nonvanishing color chemical potentials. We study how the quark matter phase diagram in the (T, m_s²/μ) plane is affected by imposing neutrality and by including the Polyakov-loop dynamics. Although these two effects are correlated in a nonlinear way, the impact of the Polyakov loop turns out to be significant in the T direction, while imposing neutrality brings a remarkable effect in the m_s²/μ direction. In particular, we find a novel unlocking transition, when the temperature is increased, even in the chiral SU(3) limit. We clarify how and why this is possible once the dynamics of the colored Polyakov loop is taken into account. We also succeed in giving an analytic expression for Tc for the transition from two-flavor pairing (2SC) to unpaired quark matter in the presence of the Polyakov loop.
NASA Astrophysics Data System (ADS)
Moradi, Hamid; Honarvar, Mohammad; Tang, Shuo; Salcudean, Septimiu E.
2017-03-01
Iterative image reconstruction algorithms have the potential to reduce the computational time required for photoacoustic tomography (PAT). An iterative deconvolution-based photoacoustic reconstruction with sparsity regularization (iDPARS) is presented which enables us to solve large-scale problems. The method deals with the limited angle of view and the directivity effects associated with clinically relevant photoacoustic tomography imaging with conventional ultrasound transducers. Our graphics processing unit (GPU) implementation is able to reconstruct large 3-D volumes (100×100×100) in less than 10 minutes. The simulation and experimental results demonstrate that iDPARS provides better images than DAS reconstruction in terms of contrast-to-noise ratio and root-mean-square error.
Marc O Delchini; Jean E. Ragusa; Ray A. Berry
2015-07-01
We present a new version of the entropy viscosity method, a viscous regularization technique for hyperbolic conservation laws, that is well-suited for low-Mach flows. By means of a low-Mach asymptotic study, new expressions for the entropy viscosity coefficients are derived. These definitions are valid for a wide range of Mach numbers, from subsonic flows (with very low Mach numbers) to supersonic flows, and no longer depend on an analytical expression for the entropy function. In addition, the entropy viscosity method is extended to Euler equations with variable area for nozzle flow problems. The effectiveness of the method is demonstrated using various 1-D and 2-D benchmark tests: flow in a converging–diverging nozzle; Leblanc shock tube; slow moving shock; strong shock for liquid phase; low-Mach flows around a cylinder and over a circular hump; and supersonic flow in a compression corner. Convergence studies are performed for smooth solutions and solutions with shocks present.
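For orientation, the classic entropy-viscosity coefficient (the baseline the paper modifies for low-Mach flows, not the new definitions themselves) can be sketched for 1D Burgers, with entropy s = u²/2 and entropy flux u³/3; the residual of the entropy equation flags shocks, and a first-order bound caps the viscosity:

```python
def entropy_viscosity(u_old, u_new, dx, dt, c_max=0.5, c_e=1.0):
    """Classic entropy-viscosity coefficients for 1D Burgers (not the
    paper's low-Mach variant).  Entropy s = u^2/2, entropy flux u^3/3;
    the entropy residual R_i is large near shocks, and the first-order
    bound c_max*dx*|u| caps the resulting viscosity."""
    n = len(u_new)
    s_old = [0.5 * u * u for u in u_old]
    s_new = [0.5 * u * u for u in u_new]
    flux = [u ** 3 / 3.0 for u in u_new]
    s_bar = sum(s_new) / n
    norm = max(abs(s - s_bar) for s in s_new) or 1.0   # ||s - s_bar||_inf
    nu = []
    for i in range(n):
        ip, im = min(i + 1, n - 1), max(i - 1, 0)      # clamped stencil
        resid = (s_new[i] - s_old[i]) / dt + (flux[ip] - flux[im]) / (2 * dx)
        nu.append(min(c_max * dx * abs(u_new[i]),
                      c_e * dx * dx * abs(resid) / norm))
    return nu

# smooth plateau plus a moving front: viscosity activates only at the front
dx, dt = 0.02, 0.01
u_old = [1.0] * 25 + [0.0] * 25
u_new = [1.0] * 26 + [0.0] * 24        # front advanced by one cell
nu = entropy_viscosity(u_old, u_new, dx, dt)
```

In smooth regions the residual (and hence the viscosity) vanishes, while at the front the min saturates at the first-order bound, which is the qualitative behavior the method relies on.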
NASA Astrophysics Data System (ADS)
Costa, P.; Ruivo, M. C.; de Sousa, C. A.; Hansen, H.; Alberico, W. M.
2009-06-01
The modification of mesonic observables in a hot medium is analyzed as a tool to investigate the restoration of chiral and axial symmetries in the context of the Polyakov-loop extended Nambu-Jona-Lasinio model. The results of the extended model lead to the conclusion that the effects of the Polyakov loop are fundamental for reproducing lattice findings. In particular, the partial restoration of the chiral symmetry is faster in the Polyakov-Nambu-Jona-Lasinio model than in the Nambu-Jona-Lasinio one, and it is responsible for several effects: the meson-quark coupling constants show a remarkable difference in both models, there is a faster tendency to recover the Okubo-Zweig-Iizuka rule, and finally the topological susceptibility nicely reproduces the lattice results around T/Tc≈1.0.
NASA Astrophysics Data System (ADS)
Trabelsi, Youssef; Benali, Naim; Bouazzi, Yassine; Kanzari, Mounir
2013-09-01
The transmission properties of hybrid quasi-periodic photonic systems (HQPS), made by combining one-dimensional periodic photonic crystals (PPCs) and quasi-periodic photonic crystals (QPCs), were theoretically studied. The hybrid quasi-periodic photonic lattice, based on hetero-structures, was built from the Fibonacci and Thue-Morse sequences. We addressed the microwave properties of waves through one-dimensional symmetric Fibonacci and Thue-Morse systems, i.e., quasi-periodic structures made up of two different dielectric materials (Rogers and air) under the quarter-wavelength condition. We show that controlling the Fibonacci parameters makes it possible to obtain selective optical filters with a narrow passband, as well as polychromatic stop-band filters whose properties can be tuned as desired. We present the self-similar features of the spectra, and illustrate the fractal process through a return map of the transmission coefficients. We extracted the band gaps of the hybrid quasi-periodic multilayered structures, called "pseudo band gaps", which often contain resonant states and can be considered a manifestation of the numerous defects distributed along the structure. The transmittance spectra showed that the cutoff frequency can be manipulated through the thicknesses of the defects and the type of dielectric layers in the system. Taken together, these two properties provide favorable conditions for the design of an all-microwave intermediate reflector.
Sun, Hokeun; Wang, Shuang
2013-05-30
Matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially in epigenetic studies with DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed a penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for the matched designs widely applied in epigenetic studies, which compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression while ignoring the matching is known to introduce serious estimation bias. In this paper, we developed a penalized conditional logistic model using the network-based penalty, which encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway, for the analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study of hepatocellular carcinoma (HCC), in which we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients using the Illumina Infinium HumanMethylation27 Beadchip. Several new CpG sites and genes known to be related to HCC were identified that were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
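A network-based penalty of the kind described combines a sparsity (lasso) term with a quadratic graph-Laplacian term that pulls the coefficients of linked CpG sites or genes toward each other. A minimal sketch, with an illustrative mixing parameterization (the paper's exact weighting and Laplacian normalization may differ):

```python
import numpy as np

def network_penalty(beta, L, lam, alpha):
    """Network-based penalty for (conditional) logistic regression.

    beta  : coefficient vector over CpG sites/genes
    L     : graph Laplacian of the biological network (CpG or pathway links)
    lam   : overall penalty strength
    alpha : mix between the lasso term and the Laplacian smoothness term
    """
    lasso = np.sum(np.abs(beta))     # sparsity: select few sites/genes
    smooth = float(beta @ L @ beta)  # shrinks coefficients of linked nodes together
    return lam * (alpha * lasso + (1 - alpha) * smooth)

# Laplacian of a 3-node chain graph: node 1 is linked to nodes 0 and 2.
L_chain = np.array([[ 1.0, -1.0,  0.0],
                    [-1.0,  2.0, -1.0],
                    [ 0.0, -1.0,  1.0]])
```

Because the quadratic form vanishes when linked coefficients are equal, the penalty favors solutions in which sites within a gene (or genes within a pathway) carry similar effects, which is the grouping behavior the abstract refers to.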
Dynamics and thermodynamics of a nonlocal Polyakov-Nambu-Jona-Lasinio model with running coupling
Hell, T.; Roessner, S.; Cristoforetti, M.; Weise, W.
2009-01-01
A nonlocal covariant extension of the two-flavor Nambu and Jona-Lasinio model is constructed, with built-in constraints from the running coupling of QCD at high-momentum and instanton physics at low-momentum scales. Chiral low-energy theorems and basic current algebra relations involving pion properties are shown to be reproduced. The momentum-dependent dynamical quark mass derived from this approach is in agreement with results from Dyson-Schwinger equations and lattice QCD. At finite temperature, inclusion of the Polyakov loop and its gauge invariant coupling to quarks reproduces the dynamical entanglement of the chiral and deconfinement crossover transitions as in the (local) Polyakov-loop-extended Nambu and Jona-Lasinio model, but now without the requirement of introducing an artificial momentum cutoff. Steps beyond the mean-field approximation are made including mesonic correlations through quark-antiquark ring summations. Various quantities of interest (pressure, energy density, speed of sound, etc.) are calculated and discussed in comparison with lattice QCD thermodynamics at zero chemical potential. The extension to finite quark chemical potential and the phase diagram in the (T, μ) plane are also discussed.
NASA Astrophysics Data System (ADS)
Sereno, Mauro; Ettori, Stefano; Meneghetti, Massimo; Sayers, Jack; Umetsu, Keiichi; Merten, Julian; Chiu, I.-Non; Zitrin, Adi
2017-06-01
Multi-wavelength techniques can probe the distribution and the physical properties of baryons and dark matter in galaxy clusters from the inner regions out to the peripheries. We present a full three-dimensional analysis combining strong and weak lensing, X-ray surface brightness and temperature, and the Sunyaev-Zel'dovich effect. The method is applied to MACS J1206.2-0847, a remarkably regular, face-on, massive, M200 = (1.1 ± 0.2) × 1015 M⊙ h-1, cluster at z = 0.44. The measured concentration, c200 = 6.3 ± 1.2, and the triaxial shape are common to haloes formed in a Λ cold dark matter scenario. The gas has settled in and follows the shape of the gravitational potential, which is evidence of pressure equilibrium via the shape theorem. There is no evidence for significant non-thermal pressure and the equilibrium is hydrostatic.
Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong
2014-03-07
Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility remains challenging. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll-compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it was discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43% compared with its planar counterpart in the optimal case. Furthermore, large-scale flexible NSP solar cell devices were fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry.
Color neutrality effects in the phase diagram of the Polyakov-Nambu-Jona-Lasinio model
Dumm, D. Gomez; Blaschke, D. B.; Grunfeld, A. G.; Scoccola, N. N.
2008-12-01
The phase diagram of a two-flavor Polyakov-loop Nambu-Jona-Lasinio model is analyzed imposing the constraint of color charge neutrality. The main effect of this constraint is a coexistence of the chiral symmetry breaking (χSB) and two-flavor superconducting phases. Additional effects are a shrinking of the χSB domain in the T-μ plane and a shift of the end point to lower temperatures, but their quantitative importance is shadowed by the intrinsic uncertainties of the model. The effects can be understood in view of the presence of a nonvanishing color chemical potential μ₈, which is introduced to compensate the color charge density ρ₈ induced by a background color gauge mean field ϕ₃. At low temperatures and large chemical potentials the model exhibits a quarkyonic phase, which gets additional support from the diquark condensation.
Comparison between the continuum threshold and the Polyakov loop as deconfinement order parameters
NASA Astrophysics Data System (ADS)
Carlomagno, J. P.; Loewe, M.
2017-02-01
We study the relation between the continuum threshold s0 within finite energy sum rules and the trace of the Polyakov loop Φ in the framework of a nonlocal SU(2) chiral quark model, establishing contact between both deconfinement order parameters at finite temperature T and chemical potential μ. In our analysis, we also include the order parameter for chiral symmetry restoration, the chiral quark condensate. We find that s0 and Φ provide the same information about the deconfinement transition, both at zero and at finite chemical potential. At zero density, the critical temperatures for both quantities coincide exactly and, at finite μ, both order parameters provide evidence for the appearance of a quarkyonic phase.
Finite temperature and the Polyakov loop in the covariant variational approach to Yang-Mills Theory
NASA Astrophysics Data System (ADS)
Quandt, Markus; Reinhardt, Hugo
2017-03-01
We extend the covariant variational approach for Yang-Mills theory in Landau gauge to non-zero temperatures. Numerical solutions for the thermal propagators are presented and compared to high-precision lattice data. To study the deconfinement phase transition, we adapt the formalism to background gauge and compute the effective action of the Polyakov loop for the colour groups SU(2) and SU(3). Using the zero-temperature propagators as input, all parameters are fixed at T = 0 and we find a clear signal for a deconfinement phase transition at finite temperatures, which is second order for SU(2) and first order for SU(3). The critical temperatures obtained are in reasonable agreement with lattice data.
Covariant variational approach to Yang-Mills theory: Effective potential of the Polyakov loop
NASA Astrophysics Data System (ADS)
Quandt, M.; Reinhardt, H.
2016-09-01
We compute the effective action of the Polyakov loop in SU(2) and SU(3) Yang-Mills theory using a previously developed covariant variational approach. The formalism is extended to background gauge and it is shown how to relate the low-order Green's functions to the ones in Landau gauge studied earlier. The renormalization procedure is discussed. The self-consistent effective action is derived and evaluated using the numerical solution of the gap equation. We find a clear signal for a deconfinement phase transition at finite temperatures, which is second order for SU(2) and first order for SU(3). The critical temperatures obtained are in reasonable agreement with high-precision lattice data.
Polyakov line actions from SU(3) lattice gauge theory with dynamical fermions via relative weights
NASA Astrophysics Data System (ADS)
Höllwieser, Roman; Greensite, Jeff
2017-03-01
We extract an effective Polyakov line action from an underlying SU(3) lattice gauge theory with dynamical fermions via the relative weights method. The center-symmetry-breaking terms in the effective theory are fit to a form suggested by the effective action of heavy dense quarks, and the effective action is solved at finite chemical potential by a mean-field approach. We show results for a small sample of lattice couplings, lattice actions, and lattice extensions in the time direction. We find in some instances that the long-range couplings in the effective action are very important to the phase structure, and that these couplings are responsible for long-lived metastable states in the effective theory. Only one of these states corresponds to the underlying lattice gauge theory.
Toniollo, Marcelo Bighetti; Macedo, Ana Paula; Rodrigues, Renata Cristina; Ribeiro, Ricardo Faria; de Mattos, Maria G
The aim of this study was to compare the biomechanical performance of splinted or nonsplinted prostheses over short- or regular-length Morse taper implants (5 mm and 11 mm, respectively) in the posterior area of the mandible using finite element analysis. Three-dimensional geometric models of regular implants (Ø 4 × 11 mm) and short implants (Ø 4 × 5 mm) were placed into a simulated model of the left posterior mandible that included the first premolar tooth; all teeth posterior to this tooth had been removed. The four experimental groups were as follows: regular group SP (three regular implants rehabilitated with splinted prostheses), regular group NSP (three regular implants rehabilitated with nonsplinted prostheses), short group SP (three short implants rehabilitated with splinted prostheses), and short group NSP (three short implants rehabilitated with nonsplinted prostheses). Oblique forces were simulated in molars (365 N) and premolars (200 N). Qualitative and quantitative analyses of the minimum principal stress in bone were performed using ANSYS Workbench software, version 10.0. The use of splinting in the short group reduced the stress on the bone surrounding the implants and tooth. The use of NSP or SP in the regular group resulted in similar stresses. When short implants are present, splinted prostheses are the best indication; nonsplinted prostheses are feasible only when regular implants are present.
The consequences of SU(3) color singletness, Polyakov loop and Z(3) symmetry on a quark-gluon gas
NASA Astrophysics Data System (ADS)
Aminul Islam, Chowdhury; Abir, Raktim; Mustafa, Munshi G.; Ray, Rajarshi; Ghosh, Sanjay K.
2014-02-01
Based on quantum statistical mechanics, we show that the SU(3) color singlet ensemble of a quark-gluon gas exhibits a Z(3) symmetry through the normalized character in the fundamental representation and also becomes equivalent, within a stationary-point approximation, to the ensemble given by the Polyakov loop. In addition, a Polyakov loop gauge potential is obtained by considering spatial gluons along with the invariant Haar measure at each space point. The probability of the normalized character in SU(3) vis-à-vis the Polyakov loop is found to be maximum at a particular value, exhibiting a strong color correlation. This clearly indicates a transition from a color correlated to an uncorrelated phase, or vice versa. When quarks are included in the gauge fields, a metastable state appears in the temperature range 145 ⩽ T(MeV) ⩽ 170 due to the explicit Z(3) symmetry breaking in the quark-gluon system. For T ⩾ 170 MeV, the metastable state disappears and stable domains appear. At low temperatures, a dynamical recombination of ionized Z(3) color charges into a color singlet Z(3) confined phase is evident, along with a confining background that originates from the circulation of two virtual spatial gluons, but with conjugate Z(3) phases, in a closed loop. We also discuss other possible consequences of the center domains in the color deconfined phase at high temperatures. Communicated by Steffen Bass
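The group-theoretic objects invoked here can be written explicitly. As a sketch (standard SU(3) results, not formulas quoted from the paper), the character of the fundamental representation, the Polyakov loop, and the reduced Haar measure over the eigenvalue angles read

```latex
\chi_F(\theta_1,\theta_2) \;=\; e^{i\theta_1} + e^{i\theta_2} + e^{-i(\theta_1+\theta_2)},
\qquad
\Phi \;=\; \tfrac{1}{3}\,\chi_F ,
```

```latex
d\mu(\theta_1,\theta_2) \;\propto\; \prod_{i<j} \left| e^{i\theta_i} - e^{i\theta_j} \right|^2 \, d\theta_1\, d\theta_2 ,
\qquad \theta_3 = -\theta_1 - \theta_2 ,
```

where the Vandermonde factor in the Haar measure is what generates the effective Polyakov loop gauge potential when the integration over spatial gluons is carried out at each space point.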
NASA Astrophysics Data System (ADS)
Lusso, Christelle; Ern, Alexandre; Bouchut, François; Mangeney, Anne; Farin, Maxime; Roche, Olivier
2017-03-01
This work is devoted to numerical modeling and simulation of granular flows relevant to geophysical flows such as avalanches and debris flows. We consider an incompressible viscoplastic fluid, described by a rheology with pressure-dependent yield stress, in a 2D setting with a free surface. We implement a regularization method to deal with the singularity of the rheological law, using a mixed finite element approximation of the momentum and incompressibility equations, and an arbitrary Lagrangian-Eulerian (ALE) formulation for the displacement of the domain. The free surface is evolved while taking care of its deposition onto the bottom and preventing it from folding over itself. Several tests are performed to assess the efficiency of our method. The first test verifies its accuracy and cost on a one-dimensional simple shear plug flow; on this configuration we set up rules for the choice of the numerical parameters. The second test compares the results of our numerical method to those predicted by an augmented Lagrangian formulation in the case of the collapse and spreading of a granular column over a horizontal rigid bed. Finally, we show the reliability of our method by comparing numerical predictions to data from experiments of granular collapse of both trapezoidal and rectangular columns over a horizontal rigid or erodible granular bed made of the same material. We compare the evolution of the free surface, the velocity profiles, and the static-flowing interface. The results show the ability of our method to deal numerically with the front behavior of granular collapses over an erodible bed.
Trajectory optimization using regularized variables
NASA Technical Reports Server (NTRS)
Lewallen, J. M.; Szebehely, V.; Tapley, B. D.
1969-01-01
Regularized equations for a particular optimal trajectory are compared with unregularized equations with respect to computational characteristics, using perturbation-type numerical optimization. In the case of the three-dimensional, low-thrust, Earth-Jupiter rendezvous, the regularized equations yield a significant reduction in computer time.
Topological Symmetry, Spin Liquids and CFT Duals of Polyakov Model with Massless Fermions
Unsal, Mithat
2008-04-30
We prove the absence of a mass gap and confinement in the Polyakov model with massless complex fermions in any representation of the gauge group. A U(1)_* topological shift symmetry protects the masslessness of one dual photon. This symmetry emerges in the IR as a consequence of the Callias index theorem and abelian duality. For matter in the fundamental representation, the infrared limits of this class of theories interpolate between weakly and strongly coupled conformal field theory (CFT) depending on the number of flavors, and provide an infinite class of CFTs in d = 3 dimensions. The long-distance physics of the model is the same as that of certain stable spin liquids. Altering the topology of the adjoint Higgs field by turning it into a compact scalar does not change the long-distance dynamics in perturbation theory; however, non-perturbative effects lead to a mass gap for the gauge fluctuations. This provides conceptual clarity to many subtle issues about compact QED_3 discussed in the context of quantum magnets, spin liquids and phase fluctuation models in cuprate superconductors. These constructions also provide new insights into zero-temperature gauge theory dynamics on R^{2,1} and R^{2,1} × S^1. The confined versus deconfined long-distance dynamics is characterized by a discrete versus continuous topological symmetry.
Contrera, G. A.; Dumm, D. Gomez; Scoccola, Norberto N.
2010-03-01
We study the finite temperature behavior of light scalar and pseudoscalar meson properties in the context of a three-flavor nonlocal chiral quark model. The model includes mixing with active strangeness degrees of freedom, and takes care of the effect of gauge interactions by coupling the quarks with the Polyakov loop. We analyze the chiral restoration and deconfinement transitions, as well as the temperature dependence of meson masses, mixing angles and decay constants. The critical temperature is found to be T_c ≈ 202 MeV, in better agreement with lattice results than the value recently obtained in the local SU(3) PNJL model. It is seen that above T_c pseudoscalar meson masses get increased, becoming degenerate with the masses of their chiral partners. The temperatures at which this matching occurs depend on the strange quark composition of the corresponding mesons. The topological susceptibility shows a sharp decrease after the chiral transition, signalling the vanishing of the U(1)_A anomaly for large temperatures.
Cristoforetti, M.; Hell, T.; Klein, B.; Weise, W.
2010-06-01
The Monte-Carlo method is applied to the Polyakov-loop extended Nambu-Jona-Lasinio model. This leads beyond the saddle-point approximation in a mean-field calculation and introduces fluctuations around the mean fields. We study the impact of fluctuations on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand canonical partition function with respect to the quark chemical potential and present a comparison with extrapolations from lattice QCD. We show that the introduction of fluctuations produces only small changes in the behavior of the order parameters for chiral symmetry restoration and the deconfinement transition. On the other hand, we find that fluctuations are necessary in order to reproduce lattice data for the flavor nondiagonal quark susceptibilities. Of particular importance are pion fields, the contribution of which is strictly zero in the saddle point approximation.
Rotations of the Regular Polyhedra
ERIC Educational Resources Information Center
Jones, MaryClara; Soto-Johnson, Hortensia
2006-01-01
The study of the rotational symmetries of the regular polyhedra is important in the classroom for many reasons. Besides giving the students an opportunity to visualize in three dimensions, it is also an opportunity to relate two-dimensional and three-dimensional concepts. For example, rotations in R[superscript 2] require a point and an angle of…
Contrera, G. A.; Orsaria, M.; Scoccola, N. N.
2010-09-01
We study the phase diagram of strongly interacting matter in the framework of a nonlocal SU(2) chiral quark model which includes wave function renormalization and coupling to the Polyakov loop. Both nonlocal interactions based on the frequently used exponential form factor, and on fits to the quark mass and renormalization functions obtained in lattice calculations are considered. Special attention is paid to the determination of the critical points, both in the chiral limit and at finite quark mass. In particular, we study the position of the critical end point as well as the value of the associated critical exponents for different model parametrizations.
NASA Astrophysics Data System (ADS)
Borsányi, Szabolcs; Fodor, Zoltán; Katz, Sándor D.; Pásztor, Attila; Szabó, Kálmán K.; Török, Csaba
2015-04-01
We study the correlators of Polyakov loops, and the corresponding gauge-invariant free energy of a static quark-antiquark pair, in 2+1 flavor QCD at finite temperature. Our simulations were carried out on N_t = 6, 8, 10, 12, 16 lattices using a Symanzik improved gauge action and a stout improved staggered action with physical quark masses. The free energies calculated from the Polyakov loop correlators are extrapolated to the continuum limit. For the free energies we use a two-step renormalization procedure that only uses data at finite temperature. We also measure correlators with definite Euclidean time reversal and charge conjugation symmetry to extract two different screening masses, one in the magnetic and one in the electric sector, to distinguish two different correlation lengths in the full Polyakov loop correlator.
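The free energy extracted from the correlator has the standard, schematic form (the additive constant is what the paper's finite-temperature renormalization procedure fixes):

```latex
F_{q\bar{q}}(r,T) \;=\; -T \,\ln \left\langle \operatorname{Tr} P(\mathbf{0}) \, \operatorname{Tr} P^{\dagger}(\mathbf{r}) \right\rangle \;+\; C(T) ,
```

where P is the Polyakov loop and the correlator is evaluated at spatial separation r; the divergent self-energy contribution absorbed into C(T) must be renormalized before the continuum limit can be taken.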
Cluster algorithm for two-dimensional U(1) lattice gauge theory
NASA Astrophysics Data System (ADS)
Sinclair, R.
1992-03-01
We use gauge fixing to rewrite the two-dimensional U(1) pure gauge model with Wilson action and periodic boundary conditions as a nonfrustrated XY model on a closed chain. The Wolff single-cluster algorithm is then applied, eliminating critical slowing down of topological modes and Polyakov loops.
NASA Astrophysics Data System (ADS)
Powell, Philip D.; Baym, Gordon
2013-07-01
We investigate the effects of realistic quark masses and local color neutrality on quark pairing in the three-flavor Polyakov-Nambu-Jona-Lasinio model. While prior studies have indicated the presence of light flavor quark (2SC) or symmetric color-flavor-locked (CFL) pairing at low temperatures, we find that in the absence of a local color neutrality constraint the inclusion of the Polyakov loop gives rise to phases in which all quark colors and flavors pair, but with unequal magnitudes. We study this asymmetric color-flavor-locked (ACFL) phase, which can exist even for equal mass quarks, identifying its location in the phase diagram, the order of the associated phase transitions, and its symmetry breaking pattern, which proves to be the intersection of the symmetry groups of the 2SC and CFL phases. We also investigate the effects of the strange quark mass on this new phase and the QCD phase diagram generally. Finally, we analyze the effect of a local color neutrality constraint on these phases of asymmetric pairing. We observe that for massless quarks the neutrality constraint renders the 2SC phase energetically unfavorable, eliminating it at low temperatures, and giving rise to the previously proposed low temperature critical point, with associated continuity between the hadronic and ACFL phases. For realistic strange quark masses, however, the neutrality constraint shrinks the 2SC region of the phase diagram, but does not eliminate it, at T=0.
Dias, W S; Bertrand, D; Lyra, M L
2017-06-01
Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, a dangerous irrelevant scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d>4.
NASA Astrophysics Data System (ADS)
Saha, Kinkar; Upadhaya, Sudipa; Ghosh, Sabyasachi
2017-02-01
We have carried out a comparative study of two different bulk viscosity expressions using a common dynamical model. The Polyakov-Nambu-Jona-Lasinio (PNJL) model in the mean-field approximation, including up to eight-quark interactions for 2+1 flavor quark matter, provides this common dynamics. We have probed the numerical agreement, as well as the discrepancies, between the two bulk viscosity expressions at vanishing quark chemical potential. Our estimate of the bulk viscosity to entropy density ratio follows a decreasing trend with temperature, as observed in most earlier investigations. We have also extended our estimates to finite values of the quark chemical potential.
NASA Astrophysics Data System (ADS)
Tawfik, A.; Magdy, N.; Diab, A.
2014-05-01
In order to characterize the higher-order moments of particle multiplicity, we implement the linear-sigma model with Polyakov-loop correction. We first study the critical phenomena and estimate some thermodynamic quantities, comparing the results with first-principles lattice QCD calculations. An extensive study of the non-normalized fourth-order moments follows, investigating their thermal and density dependences; we repeat this for moments normalized to temperature and chemical potential. The fluctuations of the second-order moment are used to estimate the chiral phase transition. We then use all of these results to map out the chiral phase diagram, which is compared with the freeze-out parameters estimated from lattice QCD simulations and from thermal models.
NASA Astrophysics Data System (ADS)
Brusseau, Elisabeth; Detti, Valérie; Coulon, Agnès; Maissiat, Emmanuèle; Boublay, Nawèle; Berthezène, Yves; Fromageau, Jérémie; Bush, Nigel; Bamber, Jeffrey
2011-03-01
We previously developed a 2D locally regularized strain estimation technique that was validated with ex vivo tissues. In this study, our technique is assessed with in vivo data, by examining breast abnormalities in clinical conditions. Method reliability is analyzed, as well as tissue strain fields according to the benign or malignant character of the lesion. Ultrasound RF data were acquired in two centers on ten lesions, five classified as fibroadenomas and the other five as malignant tumors, mainly ductal carcinomas from grades I to III. The estimation procedure we developed involves maximizing a similarity criterion (the normalized correlation coefficient, or NCC) between pre- and post-compression images, with deformation effects taken into account. The closer this coefficient is to 1, the higher the probability of correct strain estimation. Results demonstrated the ability of our technique to provide good-quality strain images with clinical data. For all lesions, movies of tissue strain during compression were obtained, with strains reaching 15%. The NCC averaged over each movie was computed, leading for the ten cases to a mean value of 0.93, a minimum of 0.87 and a maximum of 0.98. These high NCC values confirm the reliability of the strain estimation. Moreover, lesions were clearly identified in all ten cases investigated. Finally, we observed with malignant lesions that, compared with the ultrasound data, strain images can reveal a larger lesion size and can help in evaluating the invasive character of the lesion.
Total variation regularization with bounded linear variations
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2016-09-01
One of the best-known techniques for signal denoising is based on total variation regularization (TV regularization). A better understanding of TV regularization is necessary to provide a stronger mathematical justification for using TV minimization in signal processing. In this work, we deal with an intermediate case between the one- and two-dimensional cases; that is, the discrete function to be processed is two-dimensional, radially symmetric and piecewise constant. For this case, the exact solution to the problem can be obtained as follows: first, calculate the average values over rings of the noisy function; second, calculate the shift values and their directions using closed formulae depending on a regularization parameter and the structure of the rings. Although TV regularization is effective for noise removal, it often destroys fine details and thin structures of images. In order to overcome this drawback, we use TV regularization for signal denoising subject to the constraint that linear signal variations are bounded.
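The basic TV-denoising objective behind this record can be sketched in one dimension with a smoothed TV term minimized by gradient descent. This is a generic illustration, not the authors' closed-formula method for radially symmetric functions; `lam`, `eps`, the step size and iteration count are illustrative choices.

```python
import numpy as np

def tv_denoise(y, lam=1.0, eps=1e-2, step=0.02, iters=2000):
    """Gradient descent on the smoothed 1-D TV objective
    0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d|
        grad = x - y                   # gradient of the data-fit term
        grad[:-1] -= lam * g           # contribution of d_i to x[i]
        grad[1:] += lam * g            # contribution of d_i to x[i+1]
        x -= step * grad
    return x
```

On a piecewise-constant signal corrupted by alternating noise, the result both moves closer to the clean signal and has smaller total variation than the noisy input.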
Transport Code for Regular Triangular Geometry
1993-06-09
DIAMANT2 solves the two-dimensional static multigroup neutron transport equation in planar regular triangular geometry. Both regular and adjoint, inhomogeneous and homogeneous problems subject to vacuum, reflective or input specified boundary flux conditions are solved. Anisotropy is allowed for the scattering source. Volume and surface sources are allowed for inhomogeneous problems.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
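The core idea of condition-number regularization can be sketched by clipping sample eigenvalues. This is a simplified illustration only: the paper derives the clipping level by maximum likelihood, whereas here a fixed floor of lambda_max/kappa_max stands in for it.

```python
import numpy as np

def cap_condition_number(S, kappa_max=100.0):
    """Return a well-conditioned version of the covariance matrix S by
    clipping its eigenvalues so the condition number is at most kappa_max.
    Simplified sketch: the floor lambda_max/kappa_max is fixed, not the
    maximum-likelihood level of the paper's estimator."""
    w, V = np.linalg.eigh(S)
    floor = w.max() / kappa_max
    w_clipped = np.clip(w, floor, None)
    return (V * w_clipped) @ V.T
```

In the "large p small n" regime the sample covariance is singular; the clipped estimator is symmetric and its condition number is capped as requested.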
NASA Astrophysics Data System (ADS)
Tawfik, Abdel Nasser; Diab, Abdel Magied; Hussein, M. T.
2016-11-01
In the mean-field approximation, the grand canonical potential of the SU(3) Polyakov linear-σ model (PLSM) is analyzed for the chiral order parameters, σl and σs, and for the deconfinement order parameters, ϕ and ϕ∗, of light and strange quarks, respectively. Various PLSM parameters are determined from the global minimization of the real part of the potential. We then calculate the subtracted condensates (Δl,s). All these results are compared with recent lattice QCD simulations, and the essential PLSM parameters are thereby determined. A modeling of the relaxation time is utilized in estimating the conductivity properties of QCD matter in a thermal medium, namely the electric [σel(T)] and heat [κ(T)] conductivities. We find that the PLSM results on the electric conductivity and on the specific heat agree well with the available lattice QCD calculations. We have also calculated the bulk and shear viscosities normalized to the thermal entropy, ξ/s and η/s, respectively, and compared them with recent lattice QCD results. Predictions for (ξ/s)/(σel/T) and (η/s)/(σel/T) are introduced. We conclude that our results on various transport properties capture some essential ingredients for studying QCD matter in a thermal and dense medium.
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2007-01-01
We present a new scheme to regularize a three-dimensional two-body problem under perturbations. It is a combination of Sundman's time transformation and Levi-Civita's spatial coordinate transformation applied to the two-dimensional components of the position and velocity vectors in the osculating orbital plane. We adopt a coordinate triad specifying the plane as a function of the orbital angular momentum vector only. Since the magnitude of the orbital angular momentum is explicitly computed from the in-the-plane components of the position and velocity vectors, only two components of the orbital angular momentum vector are to be determined. In addition to these, we select the total energy of the two-body system and the physical time as additional components of the new variables. The equations of motion of the new variables have no singularity even when the mutual distance is extremely small, and therefore, the new variables are suitable to deal with close encounters. As a result, the number of dependent variables in the new scheme becomes eight, which is significantly smaller than the existing schemes to avoid close encounters: two less than the Kustaanheimo-Stiefel and the Bürdet-Ferrandiz regularizations, and five less than the Sperling-Bürdet/Bürdet-Heggie regularization.
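The planar Levi-Civita coordinate transformation at the heart of this scheme can be illustrated in a few lines. This shows only the coordinate map in the osculating plane, not the Sundman time transformation or the authors' full eight-variable formulation.

```python
import numpy as np

def levi_civita(u):
    """Levi-Civita map from parametric coordinates u = (u1, u2) to
    physical coordinates x = (u1^2 - u2^2, 2*u1*u2). The map doubles
    the polar angle and squares the radius, so r = |x| = |u|^2: the
    collision r -> 0 is approached smoothly in u-space."""
    u1, u2 = u
    return np.array([u1 * u1 - u2 * u2, 2.0 * u1 * u2])
```

For example, u = (1, 1) maps to x = (0, 2), and |x| always equals |u| squared, which is what removes the collision singularity from the transformed equations of motion.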
Iterated fractional Tikhonov regularization
NASA Astrophysics Data System (ADS)
Bianchi, Davide; Buccini, Alessandro; Donatelli, Marco; Serra-Capizzano, Stefano
2015-05-01
Fractional Tikhonov regularization methods have been recently proposed to reduce the oversmoothing property of the Tikhonov regularization in standard form, in order to preserve the details of the approximated solution. Their regularization and convergence properties have been previously investigated showing that they are of optimal order. This paper provides saturation and converse results on their convergence rates. Using the same iterative refinement strategy of iterated Tikhonov regularization, new iterated fractional Tikhonov regularization methods are introduced. We show that these iterated methods are of optimal order and overcome the previous saturation results. Furthermore, nonstationary iterated fractional Tikhonov regularization methods are investigated, establishing their convergence rate under general conditions on the iteration parameters. Numerical results confirm the effectiveness of the proposed regularization iterations.
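The stationary iterated Tikhonov refinement that the paper generalizes can be sketched as follows. This is the standard (non-fractional) textbook iteration, not the fractional variants introduced in the paper; `alpha` and the iteration count are illustrative.

```python
import numpy as np

def iterated_tikhonov(A, b, alpha=1.0, iters=10, x0=None):
    """Stationary iterated Tikhonov regularization: each sweep solves a
    Tikhonov problem for the current residual and adds the correction,
    x_{k+1} = x_k + (A^T A + alpha I)^{-1} A^T (b - A x_k)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    reg = A.T @ A + alpha * np.eye(n)
    for _ in range(iters):
        x = x + np.linalg.solve(reg, A.T @ (b - A @ x))
    return x
```

Each error component shrinks by a factor alpha/(sigma^2 + alpha) per sweep, so for a consistent well-posed system the iterates converge to the exact solution, overcoming the bias of a single Tikhonov solve.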
Wavelet Characterizations of Multi-Directional Regularity
NASA Astrophysics Data System (ADS)
Slimane, Mourad Ben
2011-05-01
The study of d-dimensional traces of functions of m variables leads to directional behaviors. The purpose of this paper is two-fold. Firstly, we extend the notion of one-direction pointwise Hölder regularity introduced by Jaffard to multi-directions. Secondly, we characterize multi-directional pointwise regularity by Triebel anisotropic wavelet coefficients (resp. leaders), and also by the Calderón anisotropic continuous wavelet transform.
Partitioning of regular computation on multiprocessor systems
NASA Technical Reports Server (NTRS)
Lee, Fung Fung
1988-01-01
Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.
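The computation-to-communication ratio used as the basis of comparison can be illustrated for rectangular block partitions with a 4-neighbor stencil. This toy function is an assumption for illustration, not code from the report: it charges one unit of work per mesh point and one exchanged value per exposed block edge.

```python
def comp_comm_ratio(w, h):
    """Computation-to-communication ratio for an interior w x h block of
    a 2-D mesh under a 4-neighbor (von Neumann) stencil: w*h points are
    updated, and 2*(w+h) boundary values are exchanged per iteration."""
    return (w * h) / (2 * (w + h))

# For a fixed number of mesh points per processor, the square block
# minimizes the exposed perimeter and thus maximizes the ratio.
candidates = [(1, 64), (2, 32), (4, 16), (8, 8)]
ratios = {shape: comp_comm_ratio(*shape) for shape in candidates}
```

This captures why, for this class of regular computations, partition shape matters even when every partition holds the same workload.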
Continuum regularization of gauge theory with fermions
Chan, H.S.
1987-03-01
The continuum regularization program is discussed in the case of d-dimensional gauge theory coupled to fermions in an arbitrary representation. Two physically equivalent formulations are given. First, a Grassmann formulation is presented, which is based on the two-noise Langevin equations of Sakita, Ishikawa and Alfaro and Gavela. Second, a non-Grassmann formulation is obtained by regularized integration of the matter fields within the regularized Grassmann system. Explicit perturbation expansions are studied in both formulations, and considerable simplification is found in the integrated non-Grassmann formalism.
Regularization of B-Spline Objects.
Xu, Guoliang; Bajaj, Chandrajit
2011-01-01
By a d-dimensional B-spline object (denoted as ), we mean a B-spline curve (d = 1), a B-spline surface (d = 2) or a B-spline volume (d = 3). By regularization of a B-spline object we mean the process of relocating the control points of  such that they approximate an isometric map of its definition domain in certain directions and preserve shape. In this paper we develop an efficient regularization method for , d = 1, 2, 3, based on solving weak-form L(2)-gradient flows constructed from the minimization of certain regularizing energy functionals. These flows are integrated via the finite element method using B-spline basis functions. Our experimental results demonstrate that our new regularization method is very effective.
Sparsity regularized image reconstruction
NASA Astrophysics Data System (ADS)
Hero, Alfred
2015-03-01
Most image reconstruction problems are under-determined: there are far more pixels to be resolved than there are measurements available. This means that the image space has more degrees of freedom than the measurement space. To make headway in such under-determined image reconstruction problems one must either incorporate domain knowledge or regularize. Domain knowledge restricts the size of the image space while regularization introduces bias, e.g., by forcing the reconstructed image to be smooth or have limited support. Both approaches are equivalent and can be interpreted as making the image sparse in some domain. This paper will provide a selective overview of some of the principal methods of sparsity regularized image reconstruction.
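One of the principal methods such an overview covers, ℓ1-sparsity-regularized reconstruction, can be sketched with plain iterative soft thresholding (ISTA). This is a generic textbook algorithm offered as an illustration, not a method attributed to the overview itself.

```python
import numpy as np

def ista(A, b, lam=0.1, iters=500):
    """Iterative soft-thresholding (ISTA) for the sparsity-regularized
    reconstruction min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Step size 1/L with L = ||A||_2^2, the Lipschitz constant of the
    smooth part's gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - (A.T @ (A @ x - b)) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x
```

Starting from zero on an under-determined system (more unknowns than measurements), the iteration monotonically decreases the regularized objective while the soft-thresholding step drives many coefficients exactly to zero.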
Bronnikov, K A; Fabris, J C
2006-06-30
We study self-gravitating, static, spherically symmetric phantom scalar fields with arbitrary potentials (favored by cosmological observations) and single out 16 classes of possible regular configurations with flat, de Sitter, and anti-de Sitter asymptotics. Among them are traversable wormholes, bouncing Kantowski-Sachs (KS) cosmologies, and asymptotically flat black holes (BHs). A regular BH has a Schwarzschild-like causal structure, but the singularity is replaced by a de Sitter infinity, giving a hypothetic BH explorer a chance to survive. It also looks possible that our Universe has originated in a phantom-dominated collapse in another universe, with KS expansion and isotropization after crossing the horizon. Explicit examples of regular solutions are built and discussed. Possible generalizations include k-essence type scalar fields (with a potential) and scalar-tensor gravity.
Regularized Structural Equation Modeling.
Jacobucci, Ross; Grimm, Kevin J; McArdle, John J
A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
Synchronization of Regular Automata
NASA Astrophysics Data System (ADS)
Caucal, Didier
Functional graph grammars are finite devices which generate the class of regular automata. We recall the notion of synchronization by grammars, and for any given grammar we consider the class of languages recognized by automata generated by all its synchronized grammars. The synchronization is an automaton-related notion: all grammars generating the same automaton synchronize the same languages. When the synchronizing automaton is unambiguous, the class of its synchronized languages forms an effective boolean algebra lying between the classes of regular languages and unambiguous context-free languages. We additionally provide sufficient conditions for such classes to be closed under concatenation and its iteration.
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2017-01-27
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
Regular transport dynamics produce chaotic travel times
NASA Astrophysics Data System (ADS)
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F.; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Geometry of spinor regularization
NASA Technical Reports Server (NTRS)
Hestenes, D.; Lounesto, P.
1983-01-01
The Kustaanheimo theory of spinor regularization is given a new formulation in terms of geometric algebra. The Kustaanheimo-Stiefel matrix and its subsidiary condition are put in a spinor form directly related to the geometry of the orbit in physical space. A physically significant alternative to the KS subsidiary condition is discussed. Derivations are carried out without using coordinates.
ERIC Educational Resources Information Center
Sokol, William
This autoinstructional unit deals with the phenomena of regularity in chemical behavior. The prerequisites suggested are two other autoinstructional lessons (Experiments 1 and 2) identified in the Del Mod System as SE 018 020 and SE 018 023. The equipment needed is listed and 45 minutes is the suggested time allotment. The Student Guide includes…
Comments on Regularization Ambiguities and Local Gauge Symmetries
NASA Astrophysics Data System (ADS)
Casana, R.; Pimentel, B. M.
We study the regularization ambiguities in an exact renormalized (1 +1)-dimensional field theory. We show a relation between the regularization ambiguities and the coupling parameters of the theory as well as their role in the implementation of a local gauge symmetry at quantum level.
Asymptotic inequalities on the parameters of a strongly regular graph
NASA Astrophysics Data System (ADS)
Vieira, Luís António de Almeida
2017-07-01
We first consider a strongly regular graph G whose adjacency matrix is A; next we associate with A a real three-dimensional Euclidean Jordan algebra 𝒜 of rank three. Finally, from the analysis of the spectra of a binomial Hadamard series of an element of 𝒜, we establish asymptotic inequalities on the parameters of a strongly regular graph.
Forghan, B.; Takook, M.V.; Zarei, A.
2012-09-15
In this paper, the electron self-energy, photon self-energy and vertex functions are explicitly calculated in Krein space quantization, including quantum metric fluctuations. The results are automatically regularized or finite. The magnetic anomaly and Lamb shift are also calculated in the one-loop approximation in this method. Finally, the obtained results are compared to conventional QED results. Highlights: Krein regularization yields finite values for the photon and electron self-energies and the vertex function. The magnetic anomaly is calculated and is exactly the same as the conventional result. The Lamb shift is calculated and is approximately the same as in Hilbert space.
Regularized Hamiltonians and Spinfoams
NASA Astrophysics Data System (ADS)
Alesci, Emanuele
2012-05-01
We review a recent proposal for the regularization of the scalar constraint of General Relativity in the context of LQG. The resulting constraint presents strengths and weaknesses compared to Thiemann's prescription. The main improvement is that it can generate the 1-4 Pachner moves and its matrix elements contain 15j Wigner symbols, making it compatible with the spinfoam formalism; the drawback is that Thiemann's anomaly-free proof is spoiled, because the nodes that the constraint creates have volume.
Surface counterterms and regularized holographic complexity
NASA Astrophysics Data System (ADS)
Yang, Run-Qiu; Niu, Chao; Kim, Keun-Young
2017-09-01
The holographic complexity is UV divergent. As a finite alternative, we propose a "regularized complexity" by employing a method similar to holographic renormalization. We add codimension-two boundary counterterms which do not contain any boundary stress tensor information. This means that we subtract only the non-dynamic background, and all the dynamic information of the holographic complexity is contained in the regularized part. After presenting the general counterterms for both the CA and CV conjectures in holographic spacetime dimension 5 and less, we give concrete examples: the BTZ black holes and the four- and five-dimensional Schwarzschild AdS black holes. We propose how to obtain the counterterms in higher spacetime dimensions and show explicit formulas only for some special cases with enough symmetries. We also compute the complexity of formation using the regularized complexity.
Perturbations in a regular bouncing universe
Battefeld, T.J.; Geshnizjani, G.
2006-03-15
We consider a simple toy model of a regular bouncing universe. The bounce is caused by an extra timelike dimension, which leads to a sign flip of the ρ² term in the effective four-dimensional Randall-Sundrum-like description. We find a wide class of possible bounces: big bang avoiding ones for regular matter content, and big rip avoiding ones for phantom matter. Focusing on radiation as the matter content, we discuss the evolution of scalar, vector and tensor perturbations. We compute a spectral index of n_s = -1 for scalar perturbations and a deep blue index for tensor perturbations after invoking vacuum initial conditions, ruling out such a model as a realistic one. We also find that the spectrum (evaluated at Hubble crossing) is sensitive to the bounce. We conclude that it is challenging, but not impossible, for cyclic/ekpyrotic models to succeed, if one can find a regularized version.
Regularizing portfolio optimization
NASA Astrophysics Data System (ADS)
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
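The diversification "pressure" of an L2 regularizer can be illustrated on the simpler minimum-variance problem. The paper works with expected shortfall and its connection to support vector regression; this quadratic version is only a sketch of the same regularization mechanism, and the function name and parameters are illustrative.

```python
import numpy as np

def min_variance_weights(Sigma, eta=0.0):
    """Minimum-variance portfolio under the budget constraint sum(w) = 1
    with an L2 penalty eta*||w||^2: the solution is proportional to
    (Sigma + eta*I)^{-1} 1. Larger eta pushes the weights toward the
    equal-weight (fully diversified) portfolio."""
    n = Sigma.shape[0]
    w = np.linalg.solve(Sigma + eta * np.eye(n), np.ones(n))
    return w / w.sum()
```

With eta = 0 the weights follow inverse variances; as eta grows they flatten toward 1/n, which is the stabilizing diversification effect described above.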
Numerical Comparison of Two-Body Regularizations
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2007-06-01
We numerically compare four schemes to regularize a three-dimensional two-body problem under perturbations: the Sperling-Bürdet (S-B), Kustaanheimo-Stiefel (K-S), and Bürdet-Ferrandiz (B-F) regularizations, and a three-dimensional extension of the Levi-Civita (L-C) regularization we developed recently. As for the integration time of the equation of motion, the least time is needed for the unregularized treatment, followed by the K-S, the extended L-C, the B-F, and the S-B regularizations. However, these differences become significantly smaller when the time to evaluate perturbations becomes dominant. As for the integration error after one close encounter, the K-S and the extended L-C regularizations are tied for the least error, followed by the S-B, the B-F, and finally the unregularized scheme for unperturbed orbits with eccentricity less than 2. This order is not changed significantly by various kinds of perturbations. As for the integration error of elliptical orbits after multiple orbital periods, the situation remains the same except for the rank of the S-B scheme, which varies from the best to the second worst depending on the length of integration and/or on the nature of perturbations. Also, we confirm that Kepler energy scaling enhances the performance of the unregularized, K-S, and extended L-C schemes. As a result, the K-S and the extended L-C regularizations with Kepler energy scaling provide the best cost performance in integrating almost all the perturbed two-body problems.
Analysis of 2D complicated regular polygon photonic lattices
NASA Astrophysics Data System (ADS)
Lv, Jing; Gao, Yuanmei
2017-06-01
We have numerically simulated in detail the light intensity distribution, phase distribution, and far-field diffraction of two-dimensional (2D) regular octagon and regular dodecagon lattices. In addition, using the plane-wave expansion (PWE) method, we numerically calculate the energy bands of the two lattices. Both photonic lattices have a band gap: the regular octagon lattice possesses a wide complete band gap, while the regular dodecagon lattice has an incomplete gap. Moreover, we simulated preliminary transmission images of the photonic lattices. This may inspire academic research in both light control and solitons.
1973-10-01
The theory of strongly regular graphs was introduced by Bose in 1963, in connection with partial geometries and 2-class association schemes. In a strongly regular graph, the number of vertices adjacent to any two non-adjacent vertices is constant. We shall denote by Γ(p) (resp. Γ̄(p)) the set of vertices adjacent (resp. non-adjacent) to a vertex p. A graph is the complement of another graph if the two have the same vertex set and two vertices are adjacent in one if and only if they are not adjacent in the other.
Flexible sparse regularization
NASA Astrophysics Data System (ADS)
Lorenz, Dirk A.; Resmerita, Elena
2017-01-01
The seminal paper of Daubechies, Defrise, and De Mol made clear that ℓ^p spaces with p ∈ [1,2) and p-powers of the corresponding norms are appropriate settings for dealing with the reconstruction of sparse solutions of ill-posed problems by regularization. It seems that the case p = 1 provides the best results in most situations compared to the cases p ∈ (1,2). An extensive literature also gives great credit to using ℓ^p spaces with p ∈ (0,1) together with the corresponding quasi-norms, although one has to tackle challenging numerical problems raised by the non-convexity of the quasi-norms. In any of these settings, whether superlinear, linear or sublinear, the question of how to choose the exponent p has been not only a numerical issue, but also a philosophical one. In this work we introduce a more flexible way of sparse regularization by varying exponents. We introduce the corresponding functional analytic framework, which leaves the setting of normed spaces but works with so-called F-norms. One curious result is that there are F-norms which generate the ℓ^1 space, but they are strictly convex, while the ℓ^1-norm is just convex.
Regularized versus non-regularized statistical reconstruction techniques
NASA Astrophysics Data System (ADS)
Denisova, N. V.
2011-08-01
An important feature of positron emission tomography (PET) and single photon emission computed tomography (SPECT) is the stochastic nature of real clinical data. Statistical algorithms such as ordered-subset expectation maximization (OSEM) and maximum a posteriori (MAP) are a direct consequence of this stochastic nature. The principal difference between the two algorithms is that OSEM is a non-regularized approach, while MAP is a regularized algorithm. From the theoretical point of view, reconstruction problems belong to the class of ill-posed problems and should be treated using regularization. Regularization introduces an additional unknown regularization parameter into the reconstruction procedure, as compared with non-regularized algorithms. However, a comparison of the non-regularized OSEM and regularized MAP algorithms with fixed regularization parameters has shown only minor differences between the reconstructions. This problem is analyzed in the present paper. To improve reconstruction quality, a method of local regularization is proposed, based on a spatially adaptive regularization parameter. The MAP algorithm with local regularization was tested in reconstruction of the Hoffman brain phantom.
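The non-regularized statistical reconstruction the abstract refers to is built on the expectation-maximization update (OSEM is its ordered-subset acceleration). A toy sketch of the plain MLEM iteration, with a hypothetical random system matrix A standing in for the scanner geometry and noiseless counts y:

```python
import numpy as np

# Toy emission-tomography setup (hypothetical sizes and data).
rng = np.random.default_rng(0)
A = rng.random((8, 4))               # system (projection) matrix
x_true = np.array([1.0, 2.0, 0.5, 3.0])
y = A @ x_true                       # noiseless measured counts for the sketch

x = np.ones(4)                       # MLEM requires a positive start
sens = A.T @ np.ones(8)              # sensitivity image A^T 1
for _ in range(500):
    # Multiplicative MLEM update: x <- x * A^T(y / Ax) / A^T 1
    x *= (A.T @ (y / (A @ x))) / sens

residual = np.linalg.norm(A @ x - y)
```

A MAP algorithm would modify this update with a penalty (prior) term weighted by the regularization parameter discussed in the abstract.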
Dimensional Reduction and Hadronic Processes
Signer, Adrian; Stoeckinger, Dominik
2008-11-23
We consider the application of regularization by dimensional reduction to NLO corrections of hadronic processes. The general collinear singularity structure is discussed, the origin of the regularization-scheme dependence is identified and transition rules to other regularization schemes are derived.
Mainstreaming the Regular Classroom Student.
ERIC Educational Resources Information Center
Kahn, Michael
The paper presents activities, suggested by regular classroom teachers, to help prepare the regular classroom student for mainstreaming. The author points out that regular classroom children need a vehicle in which curiosity, concern, interest, fear, attitudes and feelings can be fully explored, where prejudices can be dispelled, and where the…
NASA Astrophysics Data System (ADS)
Pan, Zan; Cui, Zhu-Fang; Chang, Chao-Hsi; Zong, Hong-Shi
2017-05-01
To investigate finite-volume effects on the chiral symmetry restoration and the deconfinement transition for a quantum chromodynamics (QCD) system with Nf = 2 (two quark flavors), we apply the Polyakov-loop extended Nambu-Jona-Lasinio model, introducing a chiral chemical potential μ5 artificially. The final numerical results indicate that the introduced chiral chemical potential does not change the critical exponents but significantly shifts the location of the critical end point (CEP); the ratios of the chiral chemical potentials and temperatures at the CEP, μc/μ5c and Tc/T5c, are significantly affected by the system size R. Specifically, Tc increases slowly with μ5 when R is "large," whereas Tc first decreases and then increases with μ5 when R is "small." It is also found that for a fixed μ5 there is an Rmin at which the critical end point vanishes, so that the whole phase diagram becomes a crossover when R < Rmin. Therefore, we suggest that for heavy-ion collision experiments, which aim to determine the possible location of the CEP, finite-volume behavior should be taken into account.
NASA Astrophysics Data System (ADS)
Agrawal, Vaibhav; Dayal, Kaushik
2015-12-01
The motion of microstructural interfaces is important in modeling twinning and structural phase transformations. Continuum models fall into two classes: sharp-interface models, where interfaces are singular surfaces; and regularized-interface models, such as phase-field models, where interfaces are smeared out. The former are challenging for numerical solutions because the interfaces need to be explicitly tracked, but have the advantage that the kinetics of existing interfaces and the nucleation of new interfaces can be transparently and precisely prescribed. In contrast, phase-field models do not require explicit tracking of interfaces, thereby enabling relatively simple numerical calculations, but the specification of kinetics and nucleation is both restrictive and extremely opaque. This prevents straightforward calibration of phase-field models to experiment and/or molecular simulations, and breaks the multiscale hierarchy of passing information from atomic to continuum. Consequently, phase-field models cannot be confidently used in dynamic settings. This shortcoming of existing phase-field models motivates our work. We present the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients are a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation - through the source term of the conservation law - and kinetics - through a distinct interfacial velocity field. A formal limit of the kinetic driving force recovers the classical continuum sharp-interface driving force, providing confidence in both the re-parametrized energy and the evolution statement. We present some 1D calculations characterizing the formulation; in a
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
Chaos regularization of quantum tunneling rates.
Pecora, Louis M; Lee, Hoshik; Wu, Dong-Ho; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-06-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. In contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
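For readers unfamiliar with the baseline the abstract builds on, here is a minimal sketch of plain NMF via the classic Lee-Seung multiplicative updates. This is not the paper's GCNMF (no graph regularizer, no convex constraint, no mixed-sign handling); it only shows the factorization V ≈ WH that all these variants extend.

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((6, 10))                 # nonnegative data matrix
k = 3                                   # target rank
W = rng.random((6, k))
H = rng.random((k, 10))

err0 = np.linalg.norm(V - W @ H)        # initial Frobenius reconstruction error
for _ in range(300):
    # Lee-Seung multiplicative updates: preserve nonnegativity and
    # monotonically decrease ||V - WH||_F.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
err = np.linalg.norm(V - W @ H)
```

A graph-regularized variant such as the one in the abstract adds a Laplacian penalty term to the objective, which changes these update rules accordingly.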
Regular black holes with flux tube core
Zaslavskii, Oleg B.
2009-09-15
We consider a class of black holes for which the area of the two-dimensional spatial cross section has a minimum on the horizon with respect to a quasiglobal (Kruskal-like) coordinate. If the horizon is regular, one can generate a tubelike counterpart of such a metric and smoothly glue it to a black hole region. The resulting composite space-time is globally regular, so all potential singularities under the horizon of the original metric are removed. Such a space-time represents a black hole without an apparent horizon. It is essential that the matter be nonvacuum in the outer region but vacuumlike in the inner one. As an example we consider a noninteracting mixture of vacuum fluid and matter with a linear equation of state, and scalar phantom fields. This approach is extended to distorted metrics, with the requirement of spherical symmetry relaxed.
Consistent regularization and renormalization in models with inhomogeneous phases
NASA Astrophysics Data System (ADS)
Adhikari, Prabal; Andersen, Jens O.
2017-02-01
In many models in condensed matter and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different ways of consistently regularizing and renormalizing quantum fluctuations, focusing on momentum cutoff, symmetric energy cutoff, and dimensional regularization. We apply these techniques calculating the vacuum energy in the Nambu-Jona-Lasinio model in 1 +1 dimensions in the large-Nc limit and in the 3 +1 dimensional quark-meson model in the mean-field approximation both for a one-dimensional chiral-density wave.
On regular rotating black holes
NASA Astrophysics Data System (ADS)
Torres, R.; Fayos, F.
2017-01-01
Different proposals for regular rotating black hole spacetimes have appeared recently in the literature. However, a rigorous analysis and proof of the regularity of this kind of spacetimes is still lacking. In this note we analyze rotating Kerr-like black hole spacetimes and find the necessary and sufficient conditions for the regularity of all their second order scalar invariants polynomial in the Riemann tensor. We also show that the regularity is linked to a violation of the weak energy conditions around the core of the rotating black hole.
Regular polygons in taxicab geometry
NASA Astrophysics Data System (ADS)
Hanson, J. R.
2014-10-01
A polygon of n sides will be called regular in taxicab geometry if it has n equal angles and n sides of equal taxicab length. This paper will show that there are no regular taxicab triangles and no regular taxicab pentagons. The sets of taxicab rectangles and taxicab squares will be shown to be the same, respectively, as the sets of Euclidean rectangles and Euclidean squares. A method of construction for a regular taxicab 2n-gon for any n will be demonstrated.
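A small sketch makes the paper's central notions checkable: taxicab length is the ℓ1 distance, a Euclidean square has four sides of equal taxicab length, while a Euclidean equilateral triangle (in the orientation below) does not. This checks only one orientation; the paper's result that no regular taxicab triangle exists covers all orientations.

```python
import math

def taxicab(p, q):
    """Taxicab (l1) distance between two points in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# An axis-aligned Euclidean unit square: all four taxicab side lengths agree,
# consistent with taxicab squares coinciding with Euclidean squares.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sq_sides = [taxicab(square[i], square[(i + 1) % 4]) for i in range(4)]

# A Euclidean equilateral triangle: its taxicab side lengths differ,
# so it is not a regular taxicab triangle in this orientation.
tri = [(0, 0), (1, 0), (0.5, math.sqrt(3) / 2)]
tri_sides = [taxicab(tri[i], tri[(i + 1) % 3]) for i in range(3)]
```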
Linear regularity and [phi]-regularity of nonconvex sets
NASA Astrophysics Data System (ADS)
Ng, Kung Fu; Zang, Rui
2007-04-01
In this paper, we discuss some sufficient conditions for the linear regularity and bounded linear regularity (and their variations) of finitely many closed (not necessarily convex) sets in a normed vector space. The accompanying necessary conditions are also given in the setting of Asplund spaces.
NASA Astrophysics Data System (ADS)
Agrawal, Vaibhav; Dayal, Kaushik
2015-12-01
A companion paper presented the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients were a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation through the source term of the conservation law and of kinetics through an interfacial velocity field. This model overcomes an important shortcoming of existing phase-field models, namely that the specification of kinetics and nucleation is both restrictive and extremely opaque. In this paper, we present a number of numerical calculations - in one and two dimensions - that characterize our formulation. These calculations illustrate (i) highly sensitive rate-dependent nucleation; (ii) independent prescription of the forward and backward nucleation stresses without changing the energy landscape; (iii) stick-slip interface kinetics; (iv) the competition between nucleation and kinetics in determining the final microstructural state; (v) the effect of anisotropic kinetics; and (vi) the effect of non-monotone kinetics. These calculations demonstrate the ability of this formulation to precisely prescribe complex nucleation and kinetics in a simple and transparent manner. We also extend our conservation statement to describe the kinetics of the junction lines between microstructural interfaces and boundaries. This enables us to prescribe an additional kinetic relation for the boundary, and we examine the interplay between the bulk kinetics and the junction kinetics.
NASA Astrophysics Data System (ADS)
Tawfik, Abdel Nasser; Magdy, Niseem
2015-01-01
Effects of an external magnetic field on various properties of quantum chromodynamics (QCD) matter under extreme conditions of temperature and density (chemical potential) have been analyzed. To this end, we use the SU(3) Polyakov linear-σ model and assume that the external magnetic field (eB) adds some restrictions to the quarks' energy due to the existence of free charges in the plasma phase. In doing this, we apply the Landau theory of quantization, which assumes that the cyclotron orbits of charged particles in a magnetic field should be quantized. This requires an additional temperature to drive the system through the chiral phase transition. Accordingly, the dependence of the critical temperature of the chiral and confinement phase transitions on the magnetic field is characterized. Based on this, we have studied the thermal evolution of thermodynamic quantities (energy density and trace anomaly) and the first four higher-order moments of particle multiplicity. With these calculations, we have studied the effects of the magnetic field on the chiral phase transition. We found that both the critical temperature Tc and the critical chemical potential increase with increasing magnetic field eB. Last but not least, the magnetic effects on the thermal evolution of four scalar and four pseudoscalar meson states are studied. We conclude that the meson masses decrease as the temperature increases up to Tc; then the vacuum effect becomes dominant and the masses rapidly increase with the temperature T. At low T, the scalar meson masses normalized to the lowest Matsubara frequency rapidly decrease as T increases; starting from Tc, the thermal dependence almost vanishes. Furthermore, the meson masses increase with increasing magnetic field. This gives a characteristic phase diagram of T vs external magnetic field eB. At high T, we find that the masses of almost all meson states become temperature independent. It is worthwhile to highlight that the various meson
Some results on the spectra of strongly regular graphs
NASA Astrophysics Data System (ADS)
Vieira, Luís António de Almeida; Mano, Vasco Moço
2016-06-01
Let G be a strongly regular graph whose adjacency matrix is A. We associate with the strongly regular graph G a real finite-dimensional Euclidean Jordan algebra 𝒱 of rank three, spanned by I and the natural powers of A, endowed with the Jordan product of matrices and with the inner product given by the usual trace of matrices. Finally, by analysis of the binomial Hadamard series of an element of 𝒱, we establish some inequalities on the parameters and on the spectrum of a strongly regular graph, like those established in theorems 3 and 4.
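The rank-three structure above reflects a basic fact worth seeing concretely: a connected strongly regular graph has exactly three distinct adjacency eigenvalues, so I, A, A² span the algebra. A quick numerical check on the Petersen graph (the strongly regular graph with parameters (10, 3, 0, 1), built here as the Kneser graph K(5,2)):

```python
import numpy as np
from itertools import combinations

# Petersen graph as Kneser graph K(5,2): vertices are 2-subsets of {0,...,4},
# two vertices adjacent iff the subsets are disjoint.
verts = list(combinations(range(5), 2))
n = len(verts)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if not set(verts[i]) & set(verts[j]):
            A[i, j] = 1.0

eigs = np.sort(np.linalg.eigvalsh(A))       # spectrum of the adjacency matrix
distinct = sorted(set(np.round(eigs, 8)))   # exactly three distinct eigenvalues
```

For the Petersen graph the spectrum is 3 (once), 1 (five times) and -2 (four times), matching the strongly regular parameter equations.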
Quaternion regularization and stabilization of perturbed central motion. II
NASA Astrophysics Data System (ADS)
Chelnokov, Yu. N.
1993-04-01
Generalized regular quaternion equations for the three-dimensional two-body problem in terms of Kustaanheimo-Stiefel variables are obtained within the framework of the quaternion theory of regularizing and stabilizing transformations of the Newtonian equations for perturbed central motion. Regular quaternion equations for perturbed central motion of a material point in a central field with a certain potential Pi are also derived in oscillatory and normal forms. In addition, systems of perturbed central motion equations are obtained which include quaternion equations of perturbed orbit orientations in oscillatory or normal form, and a generalized Binet equation is derived. A comparative analysis of the equations is carried out.
Low-Rank Matrix Factorization With Adaptive Graph Regularizer.
Lu, Gui-Fu; Wang, Yong; Zou, Jian
2016-05-01
In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... From the Federal Register Online via the Government Publishing Office FARM CREDIT SYSTEM INSURANCE CORPORATION Farm Credit System Insurance Corporation Board Regular Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-07
... Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm Credit Administration in McLean, Virginia, on December 9, 2010, from 12:30 p.m. until such time as the Board concludes... CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting...
Regularly timed events amid chaos
NASA Astrophysics Data System (ADS)
Blakely, Jonathan N.; Cooper, Roy M.; Corron, Ned J.
2015-11-01
We show rigorously that the solutions of a class of chaotic oscillators are characterized by regularly timed events in which the derivative of the solution is instantaneously zero. The perfect regularity of these events is in stark contrast with the well-known unpredictability of chaos. We explore some consequences of these regularly timed events through experiments using chaotic electronic circuits. First, we show that a feedback loop can be implemented to phase lock the regularly timed events to a periodic external signal. In this arrangement the external signal regulates the timing of the chaotic signal but does not strictly lock its phase. That is, phase slips of the chaotic oscillation persist without disturbing the timing of the regular events. Second, we couple the regularly timed events of one chaotic oscillator to those of another. A state of synchronization is observed where the oscillators exhibit synchronized regular events while their chaotic amplitudes and phases evolve independently. Finally, we add additional coupling to synchronize the amplitudes as well, though in the opposite direction, illustrating the independence of the amplitudes from the regularly timed events.
Quantum Ergodicity on Regular Graphs
NASA Astrophysics Data System (ADS)
Anantharaman, Nalini
2017-07-01
We give three different proofs of the main result of Anantharaman and Le Masson (Duke Math J 164(4):723-765, 2015), establishing quantum ergodicity—a form of delocalization—for eigenfunctions of the laplacian on large regular graphs of fixed degree. These three proofs are much shorter than the original one, quite different from one another, and we feel that each of the four proofs sheds a different light on the problem. The goal of this exploration is to find a proof that could be adapted for other models of interest in mathematical physics, such as the Anderson model on large regular graphs, regular graphs with weighted edges, or possibly certain models of non-regular graphs. A source of optimism in this direction is that we are able to extend the last proof to the case of anisotropic random walks on large regular graphs.
Rotating regular black hole solution
NASA Astrophysics Data System (ADS)
Abdujabbarov, Ahmadjon
2016-07-01
Based on the Newman-Janis algorithm, the Ayón-Beato-García spacetime metric [Phys. Rev. Lett. 80, 5056 (1998)] of the regular spherically symmetric, static, and charged black hole has been converted into rotational form. It is shown that the derived rotating regular black hole solution is regular, and that the critical value of the electric charge for which the two horizons merge into one decreases significantly in the presence of a nonvanishing rotation parameter a of the black hole.
Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions
NASA Astrophysics Data System (ADS)
Lin, Hongxia; Du, Lili
2013-01-01
In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263-74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier-Stokes equations Indiana Univ. Math. J. 57 2643-61; 2011 Global regularity criterion for the 3D Navier-Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919-32) for 3D incompressible Navier-Stokes equations.
On partial regularity problem for 3D Boussinesq equations
NASA Astrophysics Data System (ADS)
Fang, Daoyuan; Liu, Chun; Qian, Chenyin
2017-10-01
In this paper, we study the partial regularity of solutions of the three-dimensional Boussinesq equations. We first prove a criterion for local Hölder continuity of suitable weak solutions of the Boussinesq equations, and show that the one-dimensional Hausdorff measure of the singular point set is zero. Secondly, we present a local uniform gradient estimate on the suitable weak solutions and assert that the local behavior of the solution can be dominated by some scaled quantities, such as the scaled local L3-norm of the velocity. Besides, when the initial data v0 and θ0 decay sufficiently rapidly at ∞, the distribution of the regular point set of the suitable weak solutions is also considered. Based on this, one can find that the MHD equations are more similar to the Navier-Stokes equations than the Boussinesq equations are. Finally, we give a local regularity criterion for the suitable weak solutions near the boundary.
NONCONVEX REGULARIZATION FOR SHAPE PRESERVATION
CHARTRAND, RICK
2007-01-16
The authors show that using a nonconvex penalty term to regularize image reconstruction can substantially improve the preservation of object shapes. The commonly used total-variation regularization, ∫|∇u|, penalizes the length of the object edges. They show that ∫|∇u|^p, 0 < p < 1, only penalizes edges of dimension at least 2−p, and thus does not penalize finite-length edges at all. They give numerical examples showing the resulting improvement in shape preservation.
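A 1-D discrete sketch shows why p < 1 preserves sharp edges while p = 1 does not: for a jump of height h, the total-variation penalty is h regardless of whether the jump is sharp or smeared over several pixels, whereas the p-powered penalty strictly prefers the sharp jump. This toy discretization is ours, not the paper's numerical scheme.

```python
import numpy as np

def tv_p(u, p):
    """Discrete 1-D analogue of the penalty: sum of |u[i+1] - u[i]|^p."""
    return np.sum(np.abs(np.diff(u)) ** p)

sharp  = np.array([0., 0., 0., 4., 4., 4.])   # jump of height 4 in one step
ramped = np.array([0., 1., 2., 3., 4., 4.])   # same jump spread over 4 steps

tv1_sharp, tv1_ramp = tv_p(sharp, 1.0), tv_p(ramped, 1.0)  # equal: TV is blind to sharpness
tvh_sharp, tvh_ramp = tv_p(sharp, 0.5), tv_p(ramped, 0.5)  # p < 1 strictly favors the sharp edge
```

Here tv_p with p = 1 gives 4 for both profiles, while p = 1/2 gives 2 for the sharp edge and 4 for the ramp, so minimizing the nonconvex penalty drives reconstructions toward sharp edges.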
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
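A highly simplified stand-in for the detect-then-enforce idea (the paper formulates this as a quadratic program with automatically detected constraints; the greedy snapping below is only an illustration, and the function name and tolerance are ours):

```python
def regularize_lefts(lefts, tol=2.0):
    """Detect groups of near-equal left coordinates (within tol) and
    snap each group to its mean -- a toy alignment-constraint enforcement."""
    order = sorted(range(len(lefts)), key=lambda i: lefts[i])
    out = list(lefts)
    group = [order[0]]
    for i in order[1:]:
        if lefts[i] - lefts[group[-1]] <= tol:
            group.append(i)          # chain nearby coordinates into one group
        else:
            m = sum(lefts[j] for j in group) / len(group)
            for j in group:
                out[j] = m           # enforce the detected alignment constraint
            group = [i]
    m = sum(lefts[j] for j in group) / len(group)
    for j in group:
        out[j] = m
    return out

snapped = regularize_lefts([10.0, 11.0, 30.0])   # [10.5, 10.5, 30.0]
```

In the paper's formulation, the snap targets are not fixed means but free variables of a quadratic program that trades off fidelity to the input layout against the detected constraints.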
Degenerate Regularization of Forward-Backward Parabolic Equations: The Regularized Problem
NASA Astrophysics Data System (ADS)
Smarrazzo, Flavia; Tesei, Alberto
2012-04-01
We study a quasilinear parabolic equation of forward-backward type in one space dimension, under assumptions on the nonlinearity which hold for a number of important mathematical models (for example, the one-dimensional Perona-Malik equation), using a degenerate pseudoparabolic regularization proposed in Barenblatt et al. (SIAM J Math Anal 24:1414-1439, 1993), which takes time delay effects into account. We prove existence and uniqueness of positive solutions of the regularized problem in a space of Radon measures. We also study qualitative properties of such solutions, in particular concerning their decomposition into an absolutely continuous part and a singular part with respect to the Lebesgue measure. In this respect, the existence of a family of viscous entropy inequalities plays an important role.
Geometric continuum regularization of quantum field theory
Halpern, M.B. (Dept. of Physics)
1989-11-08
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs.
Word regularity affects orthographic learning.
Wang, Hua-Chen; Castles, Anne; Nickels, Lyndsey
2012-01-01
Share's self-teaching hypothesis proposes that orthographic representations are acquired via phonological decoding. A key, yet untested, prediction of this theory is that there should be an effect of word regularity on the number and quality of word-specific orthographic representations that children acquire. Thirty-four Grade 2 children were exposed to the sound and meaning of eight novel words and were then presented with those words in written form in short stories. Half the words were assigned regular pronunciations and half irregular pronunciations. Lexical decision and spelling tasks conducted 10 days later revealed that the children's orthographic representations of the regular words appeared to be stronger and more extensive than those of the irregular words.
Conformal regularization of Einstein's field equations
NASA Astrophysics Data System (ADS)
Röhr, Niklas; Uggla, Claes
2005-09-01
To study asymptotic structures, we regularize Einstein's field equations by means of conformal transformations. The conformal factor is chosen so that it carries a dimensional scale that captures crucial asymptotic features. By choosing a conformal orthonormal frame, we obtain a coupled system of differential equations for a set of dimensionless variables, associated with the conformal dimensionless metric, where the variables describe ratios with respect to the chosen asymptotic scale structure. As examples, we describe some explicit choices of conformal factors and coordinates appropriate for the situation of a timelike congruence approaching a singularity. One choice is shown to just slightly modify the so-called Hubble-normalized approach, and one leads to dimensionless first-order symmetric hyperbolic equations. We also discuss differences and similarities with other conformal approaches in the literature, as regards, e.g., isotropic singularities.
Resource Guide for Regular Teachers.
ERIC Educational Resources Information Center
Kampert, George J.
The resource guide for regular teachers provides policies and procedures of the Flour Bluff (Texas) school district regarding special education of handicapped students. Individual sections provide guidelines for the following areas: the referral process; individual assessment; participation on student evaluation and placement committee; special…
Sparsity regularization in dynamic elastography.
Honarvar, M; Sahebjavaher, R S; Salcudean, S E; Rohling, R
2012-10-07
We consider the inverse problem of continuum mechanics with the tissue deformation described by a mixed displacement-pressure finite element formulation. The mixed formulation is used to model nearly incompressible materials by simultaneously solving for both elasticity and pressure distributions. To improve numerical conditioning, a common solution to this problem is to use regularization to constrain the solutions of the inverse problem. We present a sparsity regularization technique that uses the discrete cosine transform to transform the elasticity and pressure fields to a sparse domain in which a smaller number of unknowns is required to represent the original field. We evaluate the approach by solving the dynamic elastography problem for synthetic data using such a mixed finite element technique, assuming time harmonic motion, and linear, isotropic and elastic behavior for the tissue. We compare our simulation results to those obtained using the more common Tikhonov regularization. We show that the sparsity regularization is less dependent on boundary conditions, less influenced by noise, requires no parameter tuning and is computationally faster. The algorithm has been tested on magnetic resonance elastography data captured from a CIRS elastography phantom with similar results as the simulation.
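The core idea of the sparsity regularization above, representing a smooth field with few DCT coefficients, can be sketched independently of the finite element machinery. The example below builds an orthonormal DCT-II matrix by hand and reconstructs a smooth toy "elasticity" profile from only its 5 largest coefficients; the profile and the number of retained coefficients are our illustrative choices, not values from the paper.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: rows are the cosine basis vectors."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    C[0] /= np.sqrt(2.0)                     # DC row scaling makes C orthogonal
    return C

N = 64
x = np.linspace(0.0, 1.0, N)
field = 1.0 + 0.5 * np.cos(np.pi * x)        # smooth toy field to be represented

C = dct_matrix(N)
coeffs = C @ field                           # transform to the sparse DCT domain
keep = np.argsort(np.abs(coeffs))[-5:]       # retain only the 5 largest coefficients
sparse = np.zeros(N)
sparse[keep] = coeffs[keep]
recon = C.T @ sparse                         # inverse transform (C is orthogonal)
```

Because the field is smooth, almost all its energy sits in a handful of low-frequency coefficients, which is exactly why solving the inverse problem for the few DCT unknowns improves conditioning.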
Regularized Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun
2009-01-01
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…
Giftedness in the Regular Classroom.
ERIC Educational Resources Information Center
Green, Anne
This paper presents a rationale for serving gifted students in the regular classroom and offers guidelines for recognizing students who are gifted in the seven types of intelligence proposed by Howard Gardner. Stressed is the importance of creating in the classroom a community of learners that allows all children to actively explore ideas and…
Regularization of Localized Degradation Processes
1996-12-28
In order to assess the regularization properties of non-classical micropolar (Cosserat) continua, which feature non-symmetric stress and strain tensors because of the presence of couple-stresses and micro-curvatures. It was shown that micropolar media may only exhibit localized failure in the form of tensile
Temporal regularity in speech perception: Is regularity beneficial or deleterious?
Geiser, Eveline; Shattuck-Hufnagel, Stefanie
2012-04-01
Speech rhythm has been proposed to be of crucial importance for correct speech perception and language learning. This study investigated the influence of speech rhythm in second-language processing. German pseudo-sentences were presented to participants in two conditions: 'naturally regular speech rhythm' and 'emphasized regular rhythm'. Nine expert English speakers with 3.5±1.6 years of German training repeated each sentence after hearing it once over headphones. Responses were transcribed using the International Phonetic Alphabet and analyzed for the number of correct, false and missing consonants, as well as for consonant additions. The overall number of correctly reproduced consonants did not differ between the two experimental conditions. However, speech rhythmicization significantly affected the serial-position curve of correctly reproduced syllables. The results of this pilot study are consistent with the view that speech rhythm is important for speech perception.
Regular languages, regular grammars and automata in splicing systems
NASA Astrophysics Data System (ADS)
Mohamad Jan, Nurhidaya; Fong, Wan Heng; Sarmin, Nor Haniza
2013-04-01
A splicing system is a mathematical model that connects the study of DNA molecules with formal language theory. In splicing systems, the languages, called splicing languages, are the sets of double-stranded DNA molecules that may arise from an initial set of DNA molecules in the presence of restriction enzymes and ligase. In this paper, some splicing languages resulting from their respective splicing systems are presented. Since all splicing languages are regular, the languages resulting from splicing systems can be further investigated using grammars and automata from formal language theory: a splicing language can be written as a regular language generated by a grammar, and it can also be accepted by an automaton. In this research, two restriction enzymes, namely BfuCI and NcoI, are used in the splicing systems.
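As an illustration of the automata side (a toy example, not the paper's actual splicing language), a deterministic finite automaton can accept the regular language of DNA strings containing the NcoI recognition site ccatgg:

```python
# Illustrative toy DFA (not the paper's splicing language): since every
# splicing language is regular, it can be accepted by a finite automaton.
# This DFA accepts strings over {a, g, c, t} that contain the NcoI
# recognition site "ccatgg" as a substring.

SITE = "ccatgg"

def build_substring_dfa(pattern, alphabet):
    """KMP-style DFA: the state is the length of the longest prefix of
    `pattern` matching a suffix of the input read so far; the state
    len(pattern) is accepting and absorbing."""
    m = len(pattern)
    delta = {}
    for state in range(m + 1):
        for ch in alphabet:
            if state == m:               # accepting state is absorbing
                delta[(state, ch)] = m
                continue
            s = pattern[:state] + ch     # candidate suffix after reading ch
            k = min(len(s), m)
            while k > 0 and s[-k:] != pattern[:k]:
                k -= 1
            delta[(state, ch)] = k
    return delta, m

def accepts(word, delta, accept):
    state = 0
    for ch in word:
        state = delta[(state, ch)]
    return state == accept

delta, accept = build_substring_dfa(SITE, "agct")
print(accepts("ttccatggaa", delta, accept))   # True: contains the site
print(accepts("ttccatgcaa", delta, accept))   # False
```

The same construction works for any finite set of recognition sites, since regular languages are closed under union.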
Regular Motions of Resonant Asteroids
NASA Astrophysics Data System (ADS)
Ferraz-Mello, S.
1990-11-01
This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS
Energy functions for regularization algorithms
NASA Technical Reports Server (NTRS)
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for solving inverse problems in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. The energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations and invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
Diffusion on regular random fractals
NASA Astrophysics Data System (ADS)
Aarão Reis, Fábio D. A.
1996-12-01
We study random walks on structures intermediate between statistical and deterministic fractals, called regular random fractals, constructed by introducing randomness in the distribution of lacunas of Sierpinski carpets. Random walks are simulated on finite stages of these fractals, and the scaling properties of the mean square displacement of N-step walks are analysed. The anomalous diffusion exponents obtained are very near the estimates for the carpets with the same dimension. This result motivates a discussion of the influence of some types of lattice irregularity (random structure, dead ends, lacunas) on the diffusion exponent, based on results for several fractals. We also propose to use these and other regular random fractals as models for real self-similar structures and to generalize results for statistical systems on fractals.
Regular connections among generalized connections
NASA Astrophysics Data System (ADS)
Fleischhack, Christian
2003-09-01
The properties of the space A of regular connections as a subset of the space Ā of generalized connections in the Ashtekar framework are studied. For every choice of compact structure group and smoothness category for the paths, it is determined whether A is dense in Ā or not. Moreover, it is proven that A has Ashtekar-Lewandowski measure zero for every non-trivial structure group and every smoothness category. The analogous results hold for gauge orbits instead of connections.
On different facets of regularization theory.
Chen, Zhe; Haykin, Simon
2002-12-01
This review provides a comprehensive understanding of regularization theory from different perspectives, emphasizing smoothness and simplicity principles. Using the tools of operator theory and Fourier analysis, it is shown that the solution of the classical Tikhonov regularization problem can be derived from the regularized functional defined by a linear differential (integral) operator in the spatial (Fourier) domain. State-of-the-art research relevant to regularization theory is reviewed, covering Occam's razor, minimum description length, Bayesian theory, pruning algorithms, informational (entropy) theory, statistical learning theory, and equivalent regularization. The universal principle of regularization in terms of Kolmogorov complexity is discussed. Finally, some prospective studies on regularization theory and beyond are suggested.
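A minimal sketch of the classical Tikhonov solution discussed above, in the diagonal (SVD) setting where its filtering action is explicit (the operator and noise values are invented for illustration):

```python
# Hypothetical sketch: classical Tikhonov regularization for a diagonal
# (SVD-diagonalized) ill-posed problem A x = b, where A has singular
# values s_i.  The regularized solution filters each component:
#   x_i = s_i * b_i / (s_i**2 + lam)
# damping the contributions of small singular values that amplify noise.

def tikhonov_diag(s, b, lam):
    """Tikhonov-regularized solution for a diagonal operator."""
    return [si * bi / (si**2 + lam) for si, bi in zip(s, b)]

# Ill-conditioned operator: singular values spanning 4 orders of magnitude.
s = [1.0, 0.1, 0.01, 0.001]
x_true = [1.0, 1.0, 1.0, 1.0]
noise = [0.0, 0.0, 0.005, 0.01]          # small data noise
b = [si * xi + ni for si, xi, ni in zip(s, x_true, noise)]

x_naive = [bi / si for si, bi in zip(s, b)]      # unregularized inverse
x_reg = tikhonov_diag(s, b, lam=1e-4)

err_naive = max(abs(xi - 1.0) for xi in x_naive)
err_reg = max(abs(xi - 1.0) for xi in x_reg)
print(err_naive, err_reg)   # naive inversion blows up; Tikhonov does not
```

The naive inverse amplifies the noise on the smallest singular value by a factor of 1000, while the regularized solution stays bounded, which is the smoothness/simplicity trade-off the review analyzes.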
Regular sun exposure benefits health.
van der Rhee, H J; de Vries, E; Coebergh, J W
2016-12-01
Since it was discovered that UV radiation is the main environmental cause of skin cancer, primary prevention programs have been started. These programs advise avoiding exposure to sunlight. However, the question arises whether sun-shunning behaviour might affect general health. During the last decades, new favourable associations between sunlight and disease have been discovered. There is growing observational and experimental evidence that regular exposure to sunlight contributes to the prevention of colon, breast and prostate cancer, non-Hodgkin lymphoma, multiple sclerosis, hypertension and diabetes. Initially, these beneficial effects were ascribed to vitamin D. Recently it became evident that immunomodulation, the formation of nitric oxide, melatonin and serotonin, and the effect of (sun)light on circadian clocks are involved as well. In Europe (above 50 degrees north latitude), the risk of skin cancer (particularly melanoma) is mainly caused by an intermittent pattern of exposure, whereas regular exposure confers a relatively low risk. The available data on the negative and positive effects of sun exposure are discussed. Considering these data, we hypothesize that regular sun exposure benefits health.
Note on entanglement entropy and regularization in holographic interface theories
NASA Astrophysics Data System (ADS)
Gutperle, Michael; Trivella, Andrea
2017-03-01
We discuss the computation of holographic entanglement entropy for interface conformal field theories. The fact that globally well-defined Fefferman-Graham coordinates are difficult to construct makes the regularization of the holographic theory challenging. We introduce a simple new cutoff procedure, which we call "double cutoff" regularization. We test the new procedure by comparing the resulting holographic entanglement entropies with those obtained from other cutoff procedures and find agreement. We also study three-dimensional conformal field theories with a two-dimensional interface. In that case the dual bulk geometry is constructed using a warped geometry with an AdS3 factor. We define an effective central charge for the interface through the Brown-Henneaux formula for the AdS3 factor. We investigate two concrete examples, showing that the same effective central charge appears in the computation of entanglement entropy and governs the conformal anomaly.
Creating Two-Dimensional Nets of Three-Dimensional Shapes Using "Geometer's Sketchpad"
ERIC Educational Resources Information Center
Maida, Paula
2005-01-01
This article is about a computer lab project in which prospective teachers used Geometer's Sketchpad software to create two-dimensional nets for three-dimensional shapes. Since this software package does not contain ready-made tools for creating non-regular or regular polygons, the students used prior knowledge and geometric facts to create their…
[Iterated Tikhonov Regularization for Spectral Recovery from Tristimulus].
Xie, De-hong; Li, Rui; Wan, Xiao-xia; Liu, Qiang; Zhu, Wen-feng
2016-01-01
Reflectance spectra in a multispectral image can objectively and faithfully represent color information owing to their high dimensionality and their independence of illuminant and device. To address the loss of spectral information, and the consequent loss of color information, that occurs when spectral data are reconstructed from three-dimensional colorimetric data in a trichromatic camera-based spectral image acquisition system, this work proposes an iterated Tikhonov regularization to reconstruct the reflectance spectra. First, according to the relationship between colorimetric values and reflectance spectra in colorimetric theory, we construct a spectral reconstruction equation that recovers high-dimensional spectral data from the three-dimensional colorimetric data acquired by the trichromatic camera. Then, iterated Tikhonov regularization, inspired by the Moore-Penrose pseudoinverse, is used to cope with the linear ill-posed inverse problem arising when solving the reflectance reconstruction equation. The L-curve method is used to obtain an optimal regularization parameter for the iterated Tikhonov regularization by training on a set of samples. These methods effectively control and improve the ill-conditioning of the spectral reconstruction equation and thereby reduce the loss of spectral information in the reconstructed spectral data. A verification experiment is performed on another set of samples. The experimental results show that the proposed method reconstructs the reflectance spectra with less loss of spectral information in the trichromatic camera-based spectral image acquisition system, reflected in clear decreases of spectral and colorimetric errors compared with the previous method.
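The iterated Tikhonov scheme can be sketched in the same diagonal (SVD) setting used for ordinary Tikhonov regularization; the operator and numbers below are illustrative, not the paper's camera model:

```python
# Hypothetical sketch of iterated Tikhonov regularization on a diagonal
# (SVD-diagonalized) problem s_i * x_i = b_i.  Each sweep solves a
# Tikhonov step for the current residual:
#   x_i <- x_i + s_i * (b_i - s_i * x_i) / (s_i**2 + lam)
# After k sweeps the filter factor is 1 - (lam / (s_i**2 + lam))**k,
# so iterating reduces the bias that a single Tikhonov step introduces.

def iterated_tikhonov(s, b, lam, sweeps):
    x = [0.0] * len(s)
    for _ in range(sweeps):
        x = [xi + si * (bi - si * xi) / (si**2 + lam)
             for xi, si, bi in zip(x, s, b)]
    return x

s = [1.0, 0.2, 0.05]                 # invented singular values
x_true = [1.0, -2.0, 0.5]
b = [si * xi for si, xi in zip(s, x_true)]   # noise-free data

lam = 1e-3
x1 = iterated_tikhonov(s, b, lam, sweeps=1)    # ordinary Tikhonov
x20 = iterated_tikhonov(s, b, lam, sweeps=20)  # iterated variant

bias1 = max(abs(a - t) for a, t in zip(x1, x_true))
bias20 = max(abs(a - t) for a, t in zip(x20, x_true))
print(bias1, bias20)   # the iterated solution has far less bias
```

On noise-free data the single-step solution systematically underestimates the small-singular-value components, while the iterated variant drives that bias toward zero, which is the motivation for iterating.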
Graph Regularized Auto-Encoders for Image Representation.
Yiyi Liao; Yue Wang; Yong Liu
2017-06-01
Image representation has been intensively explored in computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image that preserves the inherent information of the original image space. From the perspective of manifold learning, this is implemented with the local invariance idea, capturing the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a locally invariant deep nonlinear mapping algorithm, called the graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local structure of the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, experimental results on both clustering and classification tasks demonstrate the effectiveness of GAE and the correctness of the proposed theoretical analysis, and suggest that GAE is superior to current deep representation learning techniques, including variants of auto-encoders and existing locally invariant methods.
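The graph regularizer used by methods of this kind is, in many formulations, the Laplacian quadratic form; the identity behind it can be checked directly on toy data (the weights and points below are invented for illustration):

```python
# Sketch of the graph regularization term used (in spirit) by methods
# like GAE: for representations h_i and a symmetric weight matrix W,
#   sum_ij W_ij * ||h_i - h_j||^2  ==  2 * trace(H^T L H),
# where L = D - W is the graph Laplacian and D the degree matrix.
# Small pure-Python check of this identity on toy data.

H = [[1.0, 0.0], [0.9, 0.1], [-1.0, 2.0]]          # 3 points in R^2
W = [[0.0, 0.8, 0.1], [0.8, 0.0, 0.2], [0.1, 0.2, 0.0]]
n, d = len(H), len(H[0])

# Left-hand side: weighted sum of squared pairwise distances.
lhs = sum(W[i][j] * sum((H[i][k] - H[j][k]) ** 2 for k in range(d))
          for i in range(n) for j in range(n))

# Right-hand side: 2 * trace(H^T (D - W) H).
deg = [sum(row) for row in W]
L = [[(deg[i] if i == j else 0.0) - W[i][j] for j in range(n)]
     for i in range(n)]
LH = [[sum(L[i][m] * H[m][k] for m in range(n)) for k in range(d)]
      for i in range(n)]
rhs = 2 * sum(H[i][k] * LH[i][k] for i in range(n) for k in range(d))

print(abs(lhs - rhs) < 1e-12)   # True: the identity holds
```

Penalizing this term keeps points that are close (heavily weighted) in the input space close in the representation space, which is the "local connectivity" the abstract refers to.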
Enumeration of Extended m-Regular Linear Stacks.
Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian
2016-12-01
The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of m-regular linear stacks, in which each arc has length at least m and the degree of each vertex is bounded by 2. Since the two terminal points of a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended m-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is closer to real protein contact maps. Denote the generating functions of the m-regular linear stacks and the extended m-regular linear stacks by [Formula: see text] and [Formula: see text], respectively. We show that the latter can be written as a rational function of the former. For a given m, by eliminating the former, we obtain an equation satisfied by the latter and derive the asymptotic formula for the number of extended m-regular linear stacks of a given length.
Knowledge and regularity in planning
NASA Technical Reports Server (NTRS)
Allen, John A.; Langley, Pat; Matwin, Stan
1992-01-01
The field of planning has focused on several methods of using domain-specific knowledge. The three most common methods, use of search control, use of macro-operators, and analogy, are part of a continuum of techniques differing in the amount of reused plan information. This paper describes TALUS, a planner that exploits this continuum, and is used for comparing the relative utility of these methods. We present results showing how search control, macro-operators, and analogy are affected by domain regularity and the amount of stored knowledge.
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report a quick, systematic method is developed for finding the maximal points of any regular truth function in terms of its arithmetic invariants. (Author)
Wave dynamics of regular and chaotic rays
McDonald, S.W.
1983-09-01
In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space.
Natural frequency of regular basins
NASA Astrophysics Data System (ADS)
Tjandra, Sugih S.; Pudjaprasetya, S. R.
2014-03-01
Similar to the vibration of a guitar string or an elastic membrane, water waves in an enclosed basin undergo standing oscillatory waves, also known as seiches. The resonant (eigen) periods of seiches are determined by the water depth and the geometry of the basin; for regular basins, explicit formulas are available. Resonance occurs when the dominant frequency of the external force matches an eigenfrequency of the basin. In this paper, we apply a conservative finite volume scheme to the 2D shallow water equations to simulate resonance in closed basins. Our aim is to use this scheme, together with the energy spectra of the recorded signal, to extract the resonant periods of arbitrary basins. Here we first test the procedure by extracting the resonant periods of a square closed basin. The numerical resonant periods we obtain are comparable with those from the analytical formulas.
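For a closed rectangular basin of uniform depth, the explicit formula alluded to is the classical Merian formula; a small sketch follows (the basin dimensions are illustrative, not taken from the paper):

```python
import math

# Classical Merian formula for the seiche (resonant) periods of a
# closed rectangular basin of length L and uniform depth h:
#   T_n = 2 * L / (n * sqrt(g * h)),  n = 1, 2, 3, ...
# The basin dimensions below are illustrative, not from the paper.

def merian_periods(length_m, depth_m, n_modes=3, g=9.81):
    c = math.sqrt(g * depth_m)            # long-wave (shallow water) speed
    return [2.0 * length_m / (n * c) for n in range(1, n_modes + 1)]

periods = merian_periods(length_m=1000.0, depth_m=10.0)
print([round(t, 1) for t in periods])   # fundamental and first overtones, s
```

Peaks in the energy spectrum of a simulated surface-elevation signal should line up with these periods, which is how the numerical extraction procedure can be validated on a regular basin.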
Regularized degenerate multi-solitons
NASA Astrophysics Data System (ADS)
Correa, Francisco; Fring, Andreas
2016-09-01
We report complex PT-symmetric multi-soliton solutions to the Korteweg-de Vries equation that asymptotically contain one-soliton solutions, each possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate energy solutions of the Schrödinger equation. Technically this is achieved by the application of Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively, they may be constructed from a limiting process within the context of Hirota's direct method or from a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.
Rule extraction by successive regularization.
Ishikawa, M
2000-12-01
Knowledge acquisition is important because it is a key to solving one of the bottlenecks in artificial intelligence. Recently, knowledge acquisition using neural networks, called rule extraction, has attracted wide attention because of its computational simplicity and ability to generalize. Proposed in this paper is a novel approach to rule extraction named successive regularization. It generates a small number of dominant rules at an earlier stage and less dominant rules or exceptions at later stages. It has various advantages, such as robustness of computation, better understandability, and similarity to child development. It is applied to the classification of mushrooms, the recognition of promoters in DNA sequences, and the classification of irises. Empirical results indicate superior performance of the rule extraction in terms of the number and size of rules needed to explain the data.
Some Cosine Relations and the Regular Heptagon
ERIC Educational Resources Information Center
Osler, Thomas J.; Heng, Phongthong
2007-01-01
The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)
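Two classical heptagon cosine relations of the kind the article discusses can be checked numerically (these particular identities are standard ones; they are not necessarily the specific relations in the article):

```python
import math

# Two classical cosine relations connected with the regular heptagon,
# verified here numerically rather than by construction:
#   cos(2*pi/7) + cos(4*pi/7) + cos(6*pi/7) = -1/2
#   cos(pi/7)  - cos(2*pi/7) + cos(3*pi/7)  =  1/2
# The first follows from summing the real parts of the 7th roots of
# unity; the second is the same sum rewritten with cos(pi - x) = -cos(x).

s1 = sum(math.cos(2 * math.pi * k / 7) for k in (1, 2, 3))
s2 = (math.cos(math.pi / 7)
      - math.cos(2 * math.pi / 7)
      + math.cos(3 * math.pi / 7))

print(round(s1, 12), round(s2, 12))   # -0.5 and 0.5
```

That these sums are rational even though no individual cos(kπ/7) is constructible with straight edge and compass is the heart of why the heptagon resisted the Greeks.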
Regular Pentagons and the Fibonacci Sequence.
ERIC Educational Resources Information Center
French, Doug
1989-01-01
Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)
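The link between the Fibonacci sequence and the pentagon can be verified numerically: ratios of successive Fibonacci numbers converge to the golden ratio φ, which is exactly the diagonal-to-side ratio of a regular pentagon:

```python
import math

# The golden ratio phi links the Fibonacci sequence to the regular
# pentagon: ratios of successive Fibonacci numbers converge to phi,
# and the diagonal-to-side ratio of a regular pentagon is exactly
# phi = 2*cos(pi/5)  (chord over side: sin(72°)/sin(36°) = 2*cos(36°)).

phi = (1 + math.sqrt(5)) / 2

fib = [1, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])
ratio = fib[-1] / fib[-2]            # converges to phi

diagonal_over_side = 2 * math.cos(math.pi / 5)

print(round(ratio, 10), round(diagonal_over_side, 10))
```

The same constant governs the succession of pentagons formed by extending the sides, each scaled by a factor of φ² relative to the last.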
22 CFR 120.39 - Regular employee.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...
22 CFR 120.39 - Regular employee.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...
22 CFR 120.39 - Regular employee.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...
FPT Algorithm for Two-Dimensional Cyclic Convolutions
NASA Technical Reports Server (NTRS)
Truong, Trieu-Kie; Shao, Howard M.; Pei, D. Y.; Reed, Irving S.
1987-01-01
The fast-polynomial-transform (FPT) algorithm computes the two-dimensional cyclic convolution of two-dimensional arrays of complex numbers. The new algorithm uses cyclic polynomial convolutions of the same length. The algorithm is regular, modular, and expandable.
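The quantity the FPT computes can be specified independently of the fast algorithm: a direct double sum must agree with the transform-domain route given by the convolution theorem (a naive O(N²) DFT is used below purely as a reference, with invented input arrays):

```python
import cmath

# Reference sketch of two-dimensional cyclic convolution (the quantity
# the FPT algorithm computes), checked two ways: the direct definition
#   y[m][n] = sum_{i,j} a[i][j] * b[(m-i) mod N][(n-j) mod N]
# and the convolution theorem via a naive 2D DFT.  The FPT reaches the
# same result with fewer operations; this is just the specification.

N = 4
a = [[complex(i + j, 0) for j in range(N)] for i in range(N)]   # invented
b = [[complex(i * j, 1) for j in range(N)] for i in range(N)]   # invented

def cyclic_conv_direct(a, b):
    return [[sum(a[i][j] * b[(m - i) % N][(n - j) % N]
                 for i in range(N) for j in range(N))
             for n in range(N)] for m in range(N)]

def dft2(x, sign):
    w = lambda k, n: cmath.exp(sign * 2j * cmath.pi * k * n / N)
    return [[sum(x[i][j] * w(u, i) * w(v, j)
                 for i in range(N) for j in range(N))
             for v in range(N)] for u in range(N)]

def cyclic_conv_dft(a, b):
    A, B = dft2(a, -1), dft2(b, -1)
    prod = [[A[u][v] * B[u][v] for v in range(N)] for u in range(N)]
    y = dft2(prod, +1)
    return [[y[m][n] / (N * N) for n in range(N)] for m in range(N)]

direct = cyclic_conv_direct(a, b)
via_dft = cyclic_conv_dft(a, b)
max_diff = max(abs(direct[m][n] - via_dft[m][n])
               for m in range(N) for n in range(N))
print(max_diff < 1e-9)   # True: both methods agree
```

The FPT exploits the same diagonalization property but replaces one dimension's DFT with a polynomial transform over a ring, avoiding most complex multiplications.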
Nondissipative Velocity and Pressure Regularizations for the ICON Model
NASA Astrophysics Data System (ADS)
Restelli, M.; Giorgetta, M.; Hundertmark, T.; Korn, P.; Reich, S.
2009-04-01
formulation can be extended to the regularized systems retaining discrete conservation of mass and potential enstrophy. We also present some numerical results both in planar, doubly periodic geometry and in spherical geometry. These results show that our numerical formulation correctly approximates the behavior of the regularized models, and are a first step toward the use of the regularization idea within a complete, three-dimensional GCM. References [BR05] L. Bonaventura and T. Ringler. Analysis of discrete shallow-water models on geodesic Delaunay grids with C-type staggering. Mon. Wea. Rev., 133(8):2351-2373, August 2005. [HHPW08] M.W. Hecht, D.D. Holm, M.R. Petersen, and B.A. Wingate. Implementation of the LANS-α turbulence model in a primitive equation ocean model. J. Comp. Phys., 227(11):5691-5716, May 2008. [RWS07] S. Reich, N. Wood, and A. Staniforth. Semi-implicit methods, nonlinear balance, and regularized equations. Atmos. Sci. Lett., 8(1):1-6, 2007.
Monopole mass in the three-dimensional Georgi-Glashow model
NASA Astrophysics Data System (ADS)
Davis, A. C.; Hart, A.; Kibble, T. W.; Rajantie, A.
2002-06-01
We study the three-dimensional Georgi-Glashow model to demonstrate how magnetic monopoles can be studied fully nonperturbatively in lattice Monte Carlo simulations, without any assumptions about the smoothness of the field configurations. We examine the apparent contradiction between the conjectured analytic connection of the "broken" and "symmetric" phases and the interpretation of the mass (i.e., the free energy) of the fully quantized 't Hooft-Polyakov monopole as an order parameter distinguishing the phases. We use Monte Carlo simulations to measure the monopole free energy and its first derivative with respect to the scalar mass. On small volumes we compare this to semiclassical predictions for the monopole. On large volumes we show that the free energy is screened to zero, signaling the formation of a confining monopole condensate. This screening does not allow the monopole mass to be interpreted as an order parameter, resolving the paradox.
Class of regular bouncing cosmologies
NASA Astrophysics Data System (ADS)
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
Automated graph regularized projective nonnegative matrix factorization for document clustering.
Pei, Xiaobing; Wu, Tao; Chen, Chuanbo
2014-10-01
In this paper, a novel projective nonnegative matrix factorization (PNMF) method for enhancing the clustering performance is presented, called automated graph regularized projective nonnegative matrix factorization (AGPNMF). The idea of AGPNMF is to extend the original PNMF by incorporating the automated graph regularized constraint into the PNMF decomposition. The key advantage of this approach is that AGPNMF simultaneously finds graph weights matrix and dimensionality reduction of data. AGPNMF seeks to extract the data representation space that preserves the local geometry structure. This character makes AGPNMF more intuitive and more powerful than the original method for clustering tasks. The kernel trick is used to extend AGPNMF model related to the input space by some nonlinear map. The proposed method has been applied to the problem of document clustering using the well-known Reuters-21578, TDT2, and SECTOR data sets. Our experimental evaluations show that the proposed method enhances the performance of PNMF for document clustering.
Manifestly scale-invariant regularization and quantum effective operators
NASA Astrophysics Data System (ADS)
Ghilencea, D. M.
2016-05-01
Scale-invariant theories are often used to address the hierarchy problem. However, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function μ(σ) which, after spontaneous breaking of the scale symmetry, generates the usual dimensional regularization subtraction scale μ(⟨σ⟩). One consequence is that "evanescent" interactions generated by scale invariance of the action in d = 4 - 2ε (but vanishing in d = 4) give rise to new, finite quantum corrections. We find a (finite) correction ΔU(φ, σ) to the one-loop scalar potential for φ and σ, beyond the Coleman-Weinberg term. ΔU is due to an evanescent correction (∝ ε) to the field-dependent masses (of the states in the loop) which multiplies the pole (∝ 1/ε) of the momentum integral to give a finite quantum result. ΔU contains a nonpolynomial operator ∼ φ⁶/σ² of known coefficient and is independent of the dimensionless subtraction parameter. A more general μ(φ, σ) is ruled out since, in their classical decoupling limit, the visible sector (of the Higgs φ) and the hidden sector (dilaton σ) still interact at the quantum level; thus, the subtraction function must depend on the dilaton only, μ ∼ σ. The method is useful in models where preserving scale symmetry at the quantum level is important.
Note on Prodi-Serrin-Ladyzhenskaya type regularity criteria for the Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Tran, Chuong V.; Yu, Xinwei
2017-01-01
In this article, we prove new regularity criteria of the Prodi-Serrin-Ladyzhenskaya type for the Cauchy problem of the three-dimensional incompressible Navier-Stokes equations. Our results improve the classical L^r(0, T; L^s) regularity criteria for both velocity and pressure by factors of certain negative powers of the scaling-invariant norms ‖u‖_{L³} and ‖u‖_{Ḣ^{1/2}}.
Global regularity for a 3D Boussinesq model without thermal diffusion
NASA Astrophysics Data System (ADS)
Ye, Zhuan
2017-08-01
In this paper, we consider a modified three-dimensional incompressible Boussinesq model. The model considered in this paper has viscosity in the velocity equations, but no diffusivity in the temperature equation. To bypass the difficulty caused by the absence of thermal diffusion, we make use of the maximal L^p_t L^q_x regularity of the heat kernel to establish the global regularity result.
NASA Astrophysics Data System (ADS)
Ye, Zhuan
2016-12-01
This paper is devoted to the investigation of the regularity criterion for the two-dimensional (2D) Euler-Boussinesq equations with supercritical dissipation. By making use of the Littlewood-Paley technique, we provide an improved regularity criterion involving the temperature at the scaling-invariant level, which improves the previous results.
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, which can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it is of interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
Elementary Particle Spectroscopy in Regular Solid Rewrite
NASA Astrophysics Data System (ADS)
Trell, Erik
2008-10-01
The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)×O(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each
Testing times: regularities in the historical sciences.
Jeffares, Ben
2008-12-01
The historical sciences, such as geology, evolutionary biology, and archaeology, appear to have no means to test hypotheses. However, on closer examination, reasoning in the historical sciences relies upon regularities, regularities that can be tested. I outline the role of regularities in the historical sciences, and in the process, blur the distinction between the historical sciences and the experimental sciences: all sciences deploy theories about the world in their investigations.
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact in aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities involved only planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical implications of regularity
NASA Astrophysics Data System (ADS)
Bhatt, Manish; Acharya, Atithi; Yalavarthy, Phaneendra K.
2016-10-01
The model-based image reconstruction techniques for photoacoustic (PA) tomography require an explicit regularization. An error estimate (η2) minimization-based approach was proposed and developed for the determination of a regularization parameter for PA imaging. The regularization was used within Lanczos bidiagonalization framework, which provides the advantage of dimensionality reduction for a large system of equations. It was shown that the proposed method is computationally faster than the state-of-the-art techniques and provides similar performance in terms of quantitative accuracy in reconstructed images. It was also shown that the error estimate (η2) can also be utilized in determining a suitable regularization parameter for other popular techniques such as Tikhonov, exponential, and nonsmooth (ℓ1 and total variation norm based) regularization methods.
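The dimensionality-reduction step mentioned here can be sketched generically (standard Golub-Kahan/Lanczos bidiagonalization, not the paper's η² error-estimate machinery): the large least-squares problem is projected onto a small Krylov subspace, where applying Tikhonov regularization for any candidate parameter becomes cheap.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization: A V_k = U_{k+1} B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta0 = np.linalg.norm(b)
    U[:, 0] = b / beta0
    v_prev = np.zeros(n); beta = 0.0
    for i in range(k):
        v = A.T @ U[:, i] - beta * v_prev
        alpha = np.linalg.norm(v); v /= alpha
        u = A @ v - alpha * U[:, i]
        beta = np.linalg.norm(u); u /= beta
        V[:, i] = v; U[:, i + 1] = u
        B[i, i] = alpha; B[i + 1, i] = beta
        v_prev = v
    return U, B, V, beta0

def tikhonov_projected(A, b, lam, k):
    """Tikhonov solution restricted to the k-dimensional Krylov subspace:
    min ||B_k y - beta0 e1||^2 + lam ||y||^2, then x = V_k y."""
    U, B, V, beta0 = golub_kahan(A, b, k)
    rhs = np.zeros(k + 1); rhs[0] = beta0
    M = np.vstack([B, np.sqrt(lam) * np.eye(k)])     # stacked least squares
    t = np.concatenate([rhs, np.zeros(k)])
    y = np.linalg.lstsq(M, t, rcond=None)[0]
    return V @ y

# Small demo system; with k equal to the full dimension the projected
# solution coincides with the full Tikhonov solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 8))
b = rng.standard_normal(12)
x_proj = tikhonov_projected(A, b, lam=0.1, k=8)
```

In practice k is taken much smaller than the problem dimension, which is the computational advantage noted in the abstract.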
NASA Astrophysics Data System (ADS)
Zhan, Qin; Yuan, Yuan; Fan, Xiangtao; Huang, Jianyong; Xiong, Chunyang; Yuan, Fan
2016-06-01
Digital image correlation (DIC) is essentially implicated in a class of inverse problem. Here, a regularization scheme is developed for the subset-based DIC technique to effectively inhibit potential ill-posedness that likely arises in actual deformation calculations and hence enhance numerical stability, accuracy and precision of correlation measurement. With the aid of a parameterized two-dimensional Butterworth window, a regularized subpixel registration strategy is established, in which the amount of speckle information introduced to correlation calculations may be weighted through equivalent subset size constraint. The optimal regularization parameter associated with each individual sampling point is determined in a self-adaptive way by numerically investigating the curve of 2-norm condition number of coefficient matrix versus the corresponding equivalent subset size, based on which the regularized solution can eventually be obtained. Numerical results deriving from both synthetic speckle images and actual experimental images demonstrate the feasibility and effectiveness of the set of newly-proposed regularized DIC algorithms.
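A generic 2D Butterworth window of the kind mentioned can be built as follows (a plausible radially symmetric form; the paper's exact parameterization may differ):

```python
import numpy as np

def butterworth2d(size, cutoff, order=2):
    """Radially symmetric 2D Butterworth window on a size x size grid:
    W(r) = 1 / (1 + (r / cutoff)^(2*order)), r measured from the center."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    r = np.hypot(x - c, y - c)
    return 1.0 / (1.0 + (r / cutoff) ** (2 * order))

# 33 x 33 window: weight 1 at the center, 0.5 at the cutoff radius, and a
# smooth monotone roll-off controlled by the order.
W = butterworth2d(33, 8.0, 2)
```

Multiplying the subset intensities by such a window down-weights speckle information near the subset edge, which is how an "equivalent subset size" constraint can be imposed smoothly.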
Guzzo, Massimiliano; Bernardi, Olga; Cardin, Franco
2007-09-01
We provide a new method for the localization of Aubry-Mather sets in quasi-integrable two-dimensional twist maps. Inspired by viscosity theories, we introduce regularization techniques based on the new concept of "relative viscosity and friction," which allows one to obtain regularized parametrizations of invariant sets with irrational rotation number. Such regularized parametrizations allow one to compute a curve in the phase-space that passes near the Aubry-Mather set, and an invariant measure whose density allows one to locate the gaps on the curve. We show applications to the "golden" cantorus of the standard map as well as to a more general case.
Adaptive L₁/₂ shooting regularization method for survival analysis using gene expression data.
Liu, Xiao-Ying; Liang, Yong; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak
2013-01-01
A new adaptive L₁/₂ shooting regularization method for variable selection, based on Cox's proportional hazards model, is proposed. This adaptive L₁/₂ shooting algorithm is obtained by optimizing a reweighted iterative series of L₁ penalties with a shooting strategy for the L₁/₂ penalty. Simulation results based on high-dimensional artificial data show that the adaptive L₁/₂ shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L₁/₂ regularization method performs competitively.
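The two ingredients named here, the shooting (coordinate-descent) solver and reweighting toward an L₁/₂-type penalty, can be sketched for plain least squares (a simplified stand-in; the paper works with the Cox partial likelihood, and the weight formula below is one plausible choice):

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_shooting(A, b, lam, n_iter=200, w=None):
    """Shooting (coordinate descent) for 0.5*||Ax - b||^2 + lam * sum_j w_j |x_j|.
    Per-coordinate weights w_j enable reweighted (adaptive) variants."""
    n = A.shape[1]
    w = np.ones(n) if w is None else w
    col_sq = (A ** 2).sum(axis=0)
    x = np.zeros(n)
    r = b.copy()                                  # residual b - A x
    for _ in range(n_iter):
        for j in range(n):
            rho = A[:, j] @ r + col_sq[j] * x[j]  # A_j^T (b - A x + A_j x_j)
            x_new = soft(rho, lam * w[j]) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)
            x[j] = x_new
    return x

def adaptive_l_half(A, b, lam, n_rounds=3):
    """Reweighted L1 rounds approximating an L_{1/2}-type penalty
    (hypothetical weight formula: w_j = 1 / (sqrt(|x_j|) + eps))."""
    x = lasso_shooting(A, b, lam)
    for _ in range(n_rounds):
        w = 1.0 / (np.sqrt(np.abs(x)) + 1e-3)
        x = lasso_shooting(A, b, lam, w=w)
    return x

# Sparse-recovery demo: two active coefficients out of twenty.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[3] = 3.0; x_true[11] = -2.0
b = A @ x_true
x_l1 = lasso_shooting(A, b, lam=0.5)
x_ada = adaptive_l_half(A, b, lam=0.5)
```

The reweighting step penalizes small coefficients more heavily, which is what pushes the L₁ solution toward the sparser L₁/₂-type selection.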
Higher spin black holes in three dimensions: Remarks on asymptotics and regularity
NASA Astrophysics Data System (ADS)
Bañados, Máximo; Canto, Rodrigo; Theisen, Stefan
2016-07-01
In the context of (2+1)-dimensional SL(N,R) × SL(N,R) Chern-Simons theory we explore issues related to regularity and asymptotics on the solid torus, for stationary and circularly symmetric solutions. We display and solve all necessary conditions to ensure a regular metric and metric-like higher spin fields. We prove that holonomy conditions are necessary but not sufficient conditions to ensure regularity, and that Hawking conditions do not necessarily follow from them. Finally we give a general proof that once the chemical potentials are turned on, as demanded by regularity, the asymptotics cannot be that of Brown-Henneaux.
Harmonic R matrices for scattering amplitudes and spectral regularization.
Ferro, Livia; Łukowski, Tomasz; Meneghelli, Carlo; Plefka, Jan; Staudacher, Matthias
2013-03-22
Planar N = 4 supersymmetric Yang-Mills theory appears to be integrable. While this allows one to find this theory's exact spectrum, integrability has hitherto been of no direct use for scattering amplitudes. To remedy this, we deform all scattering amplitudes by a spectral parameter. The deformed tree-level four-point function turns out to be essentially the one-loop R matrix of the integrable N = 4 spin chain satisfying the Yang-Baxter equation. Deformed on-shell three-point functions yield novel three-leg R matrices satisfying bootstrap equations. Finally, we supply initial evidence that the spectral parameter might find its use as a novel symmetry-respecting regulator replacing dimensional regularization. Its physical meaning is a local deformation of particle helicity, a fact which might be useful for a much larger class of nonintegrable four-dimensional field theories.
Graph Regularized Nonnegative Matrix Factorization for Data Representation.
Cai, Deng; He, Xiaofei; Han, Jiawei; Huang, Thomas S
2011-08-01
Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts-based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation, which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.
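A common multiplicative-update realization of the GNMF objective ||X − UVᵀ||²_F + λ tr(VᵀLV), with graph Laplacian L = D − W, looks as follows (a minimal sketch; the paper's exact update rules and normalizations may differ in details):

```python
import numpy as np

def gnmf(X, W, k, lam=1.0, n_iter=60, seed=0):
    """Graph Regularized NMF: X (m x n, nonnegative) ~ U V^T, with
    multiplicative updates that keep U, V nonnegative and (by the standard
    auxiliary-function argument) do not increase the objective."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)); V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))
    eps = 1e-12
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

def gnmf_obj(X, U, V, W, lam):
    """Reconstruction error plus graph-smoothness penalty."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.norm(X - U @ V.T) ** 2 + lam * np.trace(V.T @ L @ V)

# Demo: random nonnegative data with a symmetric affinity graph; the same
# seed is used so runs with different iteration counts share an init.
rng = np.random.default_rng(3)
X = rng.random((15, 20))
W = rng.random((20, 20)); W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)
U1, V1 = gnmf(X, W, k=4, lam=0.5, n_iter=1)
U2, V2 = gnmf(X, W, k=4, lam=0.5, n_iter=60)
```

The λ·W·V term in the numerator pulls each data point's coefficient vector toward those of its graph neighbors, which is how the factorization "respects the graph structure."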
Regularized Partial Least Squares with an Application to NMR Spectroscopy
Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletić-Savatić, Mirjana
2014-01-01
High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexibility, general penalties, easy interpretation of results, and fast computation in high-dimensional settings. We also outline extensions of our methods leading to novel methods for non-negative PLS and generalized PLS, an adaptation of PLS for structured data. We demonstrate the utility of our methods through simulations and a case study on proton Nuclear Magnetic Resonance (NMR) spectroscopy data. PMID:24511361
Regular Decompositions for H(div) Spaces
Kolev, Tzanio; Vassilevski, Panayot
2012-01-01
We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.
12 CFR 725.3 - Regular membership.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Regular membership. 725.3 Section 725.3 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person...
Continuum regularization of quantum field theory
Bern, Z.
1986-04-01
Possible nonperturbative continuum regularization schemes for quantum field theory are discussed which are based upon the Langevin equation of Parisi and Wu. Breit, Gupta and Zaks made the first proposal for a new gauge-invariant nonperturbative regularization. Their scheme is based on smearing in the "fifth time" of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, two difficulties arise which, in general, ruin the scheme. One problem is that the superficial quadratic divergences force a bottomless action for the noise. Another difficulty is that stochastic regularization by fifth-time smearing is incompatible with Zwanziger's gauge fixing, which is the only known nonperturbative covariant gauge fixing for nonabelian gauge theories. Finally, a successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest. Hopefully, the scheme will lend itself to nonperturbative analysis. 44 refs., 16 figs.
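The Parisi-Wu construction underlying these schemes can be illustrated in zero dimensions, where the "field" is a single variable φ evolving in the fictitious fifth time τ: the Langevin process dφ = −S′(φ) dτ + η, with noise variance 2 dτ, equilibrates to the Euclidean weight e^{−S}. A toy sketch (illustrative only, with no gauge structure or smearing):

```python
import numpy as np

def langevin_sample(S_prime, n_steps=200_000, dt=0.01, burn_in=20_000, seed=0):
    """Euler-Maruyama discretization of the Parisi-Wu Langevin equation
    d(phi) = -S'(phi) d(tau) + eta, with <eta^2> = 2 dt per step."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2 * dt) * rng.standard_normal(n_steps)
    phi = 0.0
    samples = np.empty(n_steps - burn_in)
    for step in range(n_steps):
        phi += -S_prime(phi) * dt + noise[step]
        if step >= burn_in:
            samples[step - burn_in] = phi
    return samples

# Free theory S = phi^2 / 2 (unit mass): the equilibrium distribution
# exp(-S) has <phi^2> = 1, which the fifth-time average should reproduce
# up to statistical and O(dt) discretization error.
samples = langevin_sample(lambda p: p)
msq = np.mean(samples ** 2)
```

Euclidean correlation functions are then recovered as equal-fifth-time averages in the τ → ∞ limit, which is the starting point the smearing-based regularizations modify.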
12 CFR 725.3 - Regular membership.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...
12 CFR 725.3 - Regular membership.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...
12 CFR 725.3 - Regular membership.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...
12 CFR 725.3 - Regular membership.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...
Numerical Regularization of Ill-Posed Problems.
1980-07-09
Unione Matematica Italiana. 4. The parameter choice problem in linear regularization: a mathematical introduction, in "Ill-Posed Problems: Theory and...vector b which is generally unavailable (see [21], [22]). Köckler [33] has shown, however, that in the case of Tikhonov regularization for matrices it may
Regularity Re-Revisited: Modality Matters
ERIC Educational Resources Information Center
Tsapkini, Kyrana; Jarema, Gonia; Kehayia, Eva
2004-01-01
The issue of regular-irregular past tense formation was examined in a cross-modal lexical decision task in Modern Greek, a language where the orthographic and phonological overlap between present and past tense stems is the same for both regular and irregular verbs. The experiment described here is a follow-up study of previous visual lexical…
Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A
Gao, J. M.; Liu, Y.; Li, W.; Lu, J.; Dong, Y. B.; Xia, Z. W.; Yi, P.; Yang, Q. W.
2013-09-15
An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. Three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil-beam approximation, and the solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is then obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter on the criterion of generalized cross-validation. Finally, this method was applied to HL-2A experiments, yielding the distribution of plasma radiated power density in limiter and divertor discharges.
Regularization techniques in realistic Laplacian computation.
Bortel, Radoslav; Sovka, Pavel
2007-11-01
This paper explores regularization options for the ill-posed spline coefficient equations in the realistic Laplacian computation. We investigate the use of Tikhonov regularization, truncated singular value decomposition, and the so-called lambda-correction, with the regularization parameter chosen by the L-curve, generalized cross-validation, quasi-optimality, and discrepancy principle criteria. The range of regularization techniques considered is much wider than in previous works. The improvement of the realistic Laplacian is investigated by simulations on the three-shell spherical head model. The conclusion is that the best performance is provided by the combination of Tikhonov regularization and the generalized cross-validation criterion, a combination that has never been suggested for this task before.
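The recommended Tikhonov-plus-GCV combination can be sketched generically via the SVD (a textbook construction, not the paper's spline-specific implementation): one scans a grid of parameters and picks the minimizer of the GCV functional G(λ) = ||(I − A_λ)b||² / tr(I − A_λ)².

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Tikhonov regularization with the parameter chosen by generalized
    cross-validation, computed cheaply from the SVD filter factors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    res_perp = max(b @ b - beta @ beta, 0.0)   # part of b outside range(A)
    best_g, best_lam = np.inf, lambdas[0]
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam)            # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2) + res_perp
        g = resid / (m - np.sum(f)) ** 2       # GCV functional
        if g < best_g:
            best_g, best_lam = g, lam
    x = Vt.T @ ((s / (s ** 2 + best_lam)) * beta)
    return x, best_lam

# Synthetic ill-posed problem: rapidly decaying singular values, a "smooth"
# true solution aligned with the leading singular vectors, and small noise.
rng = np.random.default_rng(4)
n = 30
Uq, _ = np.linalg.qr(rng.standard_normal((n, n)))
Vq, _ = np.linalg.qr(rng.standard_normal((n, n)))
sv = 1.0 / np.arange(1, n + 1) ** 3
A = Uq @ np.diag(sv) @ Vq.T
x_true = Vq[:, 0] + 0.5 * Vq[:, 1]
b = A @ x_true + 1e-3 * rng.standard_normal(n)
x_gcv, lam_gcv = tikhonov_gcv(A, b, np.logspace(-8, 0, 50))
```

GCV needs no noise-level estimate, unlike the discrepancy principle, which is one practical reason the combination works well here.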
A linear functional strategy for regularized ranking.
Kriukova, Galyna; Panasiuk, Oleksandra; Pereverzyev, Sergei V; Tkachenko, Pavlo
2016-01-01
Regularization schemes are frequently used for performing ranking tasks. This topic has been intensively studied in recent years. However, to be effective a regularization scheme should be equipped with a suitable strategy for choosing a regularization parameter. In the present study we discuss an approach based on the idea of a linear combination of regularized rankers corresponding to different values of the regularization parameter. The coefficients of the linear combination are estimated by means of the so-called linear functional strategy. We provide a theoretical justification of the proposed approach and illustrate it by numerical experiments, some of which are related to ranking the risk of nocturnal hypoglycemia in diabetes patients.
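The combination step can be sketched in a generic inverse-problem setting. Note this is a simplified data-space least-squares stand-in, not the actual linear functional strategy (which estimates the coefficients through evaluations of linear functionals rather than a full residual fit): several regularized solutions are computed and then blended so that no single parameter must be selected.

```python
import numpy as np

def combine_regularized(A, b, lambdas):
    """Compute Tikhonov solutions x_i for each lambda, then choose mixing
    coefficients c minimizing ||A (sum_i c_i x_i) - b||^2."""
    n = A.shape[1]
    X = np.column_stack([
        np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        for lam in lambdas
    ])
    # least squares in data space: (A X) c ~ b
    c = np.linalg.lstsq(A @ X, b, rcond=None)[0]
    return X @ c, X, c

# Demo: the blended solution can do no worse (in residual) than the best
# single-parameter solution, since each x_i is in the combination space.
rng = np.random.default_rng(5)
A = rng.standard_normal((40, 15))
x_true = rng.standard_normal(15)
b = A @ x_true + 0.01 * rng.standard_normal(40)
lambdas = [0.01, 0.1, 1.0, 10.0]
x_comb, X, c = combine_regularized(A, b, lambdas)
```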
On regularizations of the Dirac delta distribution
NASA Astrophysics Data System (ADS)
Hosseini, Bamdad; Nigam, Nilima; Stockie, John M.
2016-01-01
In this article we consider regularizations of the Dirac delta distribution with applications to prototypical elliptic and hyperbolic partial differential equations (PDEs). We study the convergence of a sequence of distributions S_H to a singular term S as a parameter H (associated with the support size of S_H) shrinks to zero. We characterize this convergence in both the weak-* topology of distributions and a weighted Sobolev norm. These notions motivate a framework for constructing regularizations of the delta distribution that includes a large class of existing methods in the literature. This framework allows different regularizations to be compared. The convergence of solutions of PDEs with these regularized source terms is then studied in various topologies such as pointwise convergence on a deleted neighborhood and weighted Sobolev norms. We also examine the lack of symmetry in tensor product regularizations and effects of dissipative error in hyperbolic problems.
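One standard member of the class of regularizations discussed (a cosine-smoothed kernel of the type used for immersed-boundary delta functions; illustrative, not necessarily one of the paper's specific choices) is easy to write down and check against the usual moment conditions:

```python
import numpy as np

def cosine_delta(x, H):
    """Cosine-smoothed regularization of the Dirac delta, supported on
    [-H, H]: psi_H(x) = (1 + cos(pi x / H)) / (2 H) for |x| <= H."""
    out = np.zeros_like(x)
    mask = np.abs(x) <= H
    out[mask] = (1.0 + np.cos(np.pi * x[mask] / H)) / (2.0 * H)
    return out

# Check the zeroth moment (unit mass) and first moment (zero, by symmetry)
# on a fine symmetric grid; endpoints lie outside the support, so the
# rectangle sum equals the trapezoid rule here.
x = np.linspace(-1.0, 1.0, 4001)
H = 0.5
d = cosine_delta(x, H)
h = x[1] - x[0]
mass = d.sum() * h
first_moment = (x * d).sum() * h
```

These discrete moment conditions are exactly the kind of constraint the abstract's framework uses to compare regularizations and to predict convergence rates as H → 0.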
Supersymmetric Regularization, Two-Loop QCD Amplitudes and Coupling Shifts
Dixon, Lance
2002-03-08
We present a definition of the four-dimensional helicity (FDH) regularization scheme valid for two or more loops. This scheme was previously defined and utilized at one loop. It amounts to a variation on the standard 't Hooft-Veltman scheme and is designed to be compatible with the use of helicity states for "observed" particles. It is similar to dimensional reduction in that it maintains an equal number of bosonic and fermionic states, as required for preserving supersymmetry. Supersymmetry Ward identities relate different helicity amplitudes in supersymmetric theories. As a check that the FDH scheme preserves supersymmetry, at least through two loops, we explicitly verify a number of these identities for gluon-gluon scattering (gg → gg) in supersymmetric QCD. These results also cross-check recent non-trivial two-loop calculations in ordinary QCD. Finally, we compute the two-loop shift between the FDH coupling and the standard MS-bar coupling, α_s. The FDH shift is identical to the one for dimensional reduction. The two-loop coupling shifts are then used to obtain the three-loop QCD β function in the FDH and dimensional reduction schemes.
Regular black holes and noncommutative geometry inspired fuzzy sources
NASA Astrophysics Data System (ADS)
Kobayashi, Shinpei
2016-05-01
We investigated regular black holes with fuzzy sources in three and four dimensions. The density distributions of such fuzzy sources are inspired by noncommutative geometry and given by Gaussian or generalized Gaussian functions. We utilized mass functions to give a physical interpretation of the horizon formation condition for the black holes. In particular, we investigated three-dimensional BTZ-like black holes and four-dimensional Schwarzschild-like black holes in detail, and found that the number of horizons is related to the space-time dimensions, and the existence of a void in the vicinity of the center of the space-time is significant, rather than noncommutativity. As an application, we considered a three-dimensional black hole with the fuzzy disc which is a disc-shaped region known in the context of noncommutative geometry as a source. We also analyzed a four-dimensional black hole with a source whose density distribution is an extension of the fuzzy disc, and investigated the horizon formation condition for it.
Quantitative regularities in floodplain formation
NASA Astrophysics Data System (ADS)
Nevidimova, O.
2009-04-01
Modern methods of the theory of complex systems make it possible to build mathematical models of complex systems in which self-organizing processes are largely determined by nonlinear effects and feedback. However, some factors that exert a significant influence on the dynamics of geomorphosystems can hardly be expressed adequately in the language of mathematical models. Conceptual modeling allows us to overcome this difficulty. It is based on the methods of synergetics, which, together with the theory of dynamical systems and classical geomorphology, make it possible to describe the dynamics of geomorphological systems. The most adequate concept for the mathematical modeling of complex systems is that of dynamics based on equilibrium. This concept rests on dynamic equilibrium, the tendency towards which is observed in the evolution of all geomorphosystems. As an objective law, it is revealed in the evolution of fluvial relief in general, and in river channel processes in particular, demonstrating the ability of these systems to self-organize. The channel process is expressed in the formation of river reaches, riffles, meanders and floodplain. As the floodplain is a surface periodically flooded during high waters, it naturally connects the river channel with the slopes, being one of the boundary expressions of the water stream's activity. Floodplain dynamics is inseparable from channel dynamics. The floodplain is formed by simultaneous horizontal and vertical displacement of the river channel, that is, Y = Y(x, y), where x and y are the horizontal and vertical coordinates and Y is the floodplain height. When dy/dt = 0 (for a non-lowering river channel), the river, being displaced in a horizontal plane, leaves behind a low surface whose flooding during high waters (total duration of flooding) changes from a maximum at the initial moment t0 to zero at the moment tn. In a similar manner, the total amount of material accumulated on the floodplain surface changes
Manifold regularized non-negative matrix factorization with label information
NASA Astrophysics Data System (ADS)
Li, Huirong; Zhang, Jiangshe; Wang, Changpeng; Liu, Junmin
2016-03-01
Non-negative matrix factorization (NMF) as a popular technique for finding parts-based, linear representations of non-negative data has been successfully applied in a wide range of applications, such as feature learning, dictionary learning, and dimensionality reduction. However, both the local manifold regularization of data and the discriminative information of the available labels have not been taken into account together in NMF. We propose a new semisupervised matrix decomposition method, called manifold regularized non-negative matrix factorization (MRNMF) with label information, which incorporates the manifold regularization and the label information into NMF to improve the performance of NMF in clustering tasks. We encode the local geometrical structure of the data space by constructing a nearest neighbor graph and enhance the discriminative ability of different classes by effectively using the label information. Experimental comparisons with the state-of-the-art methods on the COIL20, PIE, Extended Yale B, and MNIST databases demonstrate the effectiveness of MRNMF.
Interior Regularity Estimates in High Conductivity Homogenization and Application
NASA Astrophysics Data System (ADS)
Briane, Marc; Capdeboscq, Yves; Nguyen, Luc
2013-01-01
In this paper, uniform pointwise regularity estimates for the solutions of conductivity equations are obtained in a unit conductivity medium reinforced by an ε-periodic lattice of highly conducting thin rods. The estimates are derived only at a distance ε^{1+τ} (for some τ > 0) away from the fibres. This distance constraint is rather sharp since the gradients of the solutions are shown to be unbounded locally in L^p as soon as p > 2. One key ingredient is the derivation in dimension two of regularity estimates for the solutions of the equations deduced from a Fourier series expansion with respect to the fibres' direction, and weighted by the high-contrast conductivity. The dependence on powers of ε of these two-dimensional estimates is shown to be sharp. The initial motivation for this work comes from imaging, and enhanced resolution phenomena observed experimentally in the presence of micro-structures (Lerosey et al., Science 315:1120-1124, 2007). We use these regularity estimates to characterize the signature of low-volume-fraction heterogeneities in the fibre-reinforced medium, assuming that the heterogeneities stay at a distance ε^{1+τ} away from the fibres.
Regularization of chaos by noise in electrically driven nanowire systems
NASA Astrophysics Data System (ADS)
Hessari, Peyman; Do, Younghae; Lai, Ying-Cheng; Chae, Junseok; Park, Cheol Woo; Lee, GyuWon
2014-04-01
Electrically driven nanowire systems are of great importance to nanoscience and engineering. Due to strong nonlinearity, chaos can arise, but in many applications it is desirable to suppress chaos. The intrinsically high-dimensional nature of the system prevents application of conventional methods of controlling chaos. Remarkably, we find that the phenomenon of coherence resonance, which has been well documented but only for low-dimensional chaotic systems, can occur in the nanowire system, which is described mathematically by two coupled nonlinear partial differential equations subject to periodic driving and noise. In particular, we find that, when the nanowire is in either the weakly chaotic or the extensively chaotic regime, an optimal level of noise can significantly enhance the regularity of the oscillations. This result is robust: it holds regardless of whether the noise is white or colored, and of whether the stochastic drivings in the two independent directions transverse to the nanowire are correlated or independent of each other. Noise can thus regularize chaotic oscillations through the mechanism of coherence resonance in the nanowire system. More generally, we posit that noise can provide a practical way to harness chaos in nanoscale systems.
Anomalies, Hawking radiations, and regularity in rotating black holes
Iso, Satoshi; Umetsu, Hiroshi; Wilczek, Frank
2006-08-15
This is an extended version of our previous letter [S. Iso, H. Umetsu, and F. Wilczek, Phys. Rev. Lett. 96, 151302 (2006)]. In this paper we consider rotating black holes and show that the flux of Hawking radiation can be determined by anomaly cancellation conditions and a regularity requirement at the horizon. By using a dimensional reduction technique, each partial wave of quantum fields in a d=4 rotating black hole background can be interpreted as a (1+1)-dimensional charged field with a charge proportional to the azimuthal angular momentum m. From this and the analysis [S. P. Robinson and F. Wilczek, Phys. Rev. Lett. 95, 011303 (2005); S. Iso, H. Umetsu, and F. Wilczek, Phys. Rev. Lett. 96, 151302 (2006)] of Hawking radiation from charged black holes, we show that the total flux of Hawking radiation from rotating black holes can be universally determined in terms of the values of anomalies at the horizon by demanding gauge invariance and general coordinate covariance at the quantum level. We also clarify our choice of boundary conditions and show that our results are consistent with the effective action approach, where regularity at the future horizon and vanishing of ingoing modes at r = ∞ are imposed (i.e., the Unruh vacuum).
Functional MRI using regularized parallel imaging acquisition.
Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M; Belliveau, John W; Wald, Lawrence L; Kwong, Kenneth K
2005-08-01
Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced number of data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter. Since regularization requires a static prior, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from a combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and with different array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions.
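The effect of a static prior on a regularized reconstruction can be sketched with a generic ridge-type estimator. This is not the authors' SENSE pipeline: the encoding matrix A, the prior image, and the noise levels below are stand-ins invented for the example, and the estimator is the plain penalized least squares argmin ||Ax − y||² + λ||x − prior||²:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
A = rng.normal(size=(20, n))                 # toy undersampled encoding operator
x_true = np.linspace(0, 1, n) ** 2           # "image" to recover
prior = x_true + rng.normal(0, 0.05, n)      # static prior (e.g. a reference scan)
y = A @ x_true + rng.normal(0, 0.1, 20)      # undersampled, noisy measurements

def regularized_recon(A, y, prior, lam):
    # argmin_x ||A x - y||^2 + lam ||x - prior||^2, solved in closed form
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y + lam * prior)

x_reg = regularized_recon(A, y, prior, lam=1.0)
x_unreg = np.linalg.lstsq(A, y, rcond=None)[0]   # minimum-norm unregularized fit
err_reg = np.linalg.norm(x_reg - x_true)
err_unreg = np.linalg.norm(x_unreg - x_true)
```

With a reasonable prior the regularized error is smaller, mirroring the SNR gain; the CNR question studied in the paper arises because the same prior also biases dynamic contrast.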
Regularity detection by haptics and vision.
Cecchetto, Stefano; Lawson, Rebecca
2017-01-01
For vision, mirror-reflectional symmetry is usually easier to detect when it occurs within 1 object than when it occurs across 2 objects. The opposite pattern has been found for a different regularity, repetition. We investigated whether these results generalize to our sense of active touch (haptics). This was done to examine whether the interaction observed in vision results from intrinsic properties of the environment, or whether it is a consequence of how that environment is perceived and explored. In 4 regularity detection experiments, we haptically presented novel, planar shapes and then visually presented images of the same shapes. In addition to modality (haptics, vision), we varied regularity-type (symmetry, repetition), objectness (1, 2) and alignment of the axis of regularity with respect to the body midline (aligned, across). For both modalities, performance was better overall for symmetry than repetition. For vision, we replicated the previously reported regularity-type by objectness interaction for both stereoscopic and pictorial presentation, and for slanted and frontoparallel views. In contrast, for haptics, there was a 1-object advantage for repetition, as well as for symmetry when stimuli were explored with 1 hand, and no effect of objectness was found for 2-handed exploration. These results suggest that regularity is perceived differently in vision and in haptics, such that regularity detection does not just reflect modality-invariant, physical properties of our environment.
The hypergraph regularity method and its applications
Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.
2005-01-01
Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821
Multiple graph regularized protein domain ranking
2012-01-01
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331
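The single-graph baseline that MultiG-Rank generalizes fits in a few lines: ranking scores f trade off fidelity to a query indicator y against smoothness fᵀLf on a similarity graph. The toy graph and the parameter mu below are invented for the illustration; the actual method learns a weighted combination of multiple graphs.

```python
import numpy as np

# toy similarity graph: two fully connected 4-node clusters, no cross edges
A = np.zeros((8, 8)); A[:4, :4] = 1; A[4:, 4:] = 1; np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
y = np.zeros(8); y[0] = 1.0           # query: rank everything against node 0

def graph_rank(L, y, mu=0.5):
    # argmin_f ||f - y||^2 + mu * f^T L f  =>  f = (I + mu L)^{-1} y
    return np.linalg.solve(np.eye(L.shape[0]) + mu * L, y)

f = graph_rank(L, y)                  # scores spread over the query's cluster
```

Nodes in the query's cluster receive strictly positive scores while the disconnected cluster scores zero, which is how the graph term propagates pairwise similarity into a global ranking.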
Lagrangian averaging, nonlinear waves, and shock regularization
NASA Astrophysics Data System (ADS)
Bhat, Harish S.
In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-α) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories η^ε centered around a given Lagrangian flow η. With this tube framework, the LAE-α equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations ξ, truncate, average, and then model those terms that are nonlinear functions of ξ. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments that show that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here α is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-α limit, without any standard viscosity
Completeness and regularity of generalized fuzzy graphs.
Samanta, Sovan; Sarkar, Biswajit; Shin, Dongmin; Pal, Madhumangal
2016-01-01
Fuzzy graphs are the backbone of many real systems like networks, images, and scheduling. However, restrictions on their edges limit the systems that fuzzy graphs can represent; generalized fuzzy graphs are appropriate for avoiding such restrictions. In this study, generalized fuzzy graphs are introduced and their matrix representation is described. Completeness and regularity are two important parameters of graph theory. Here, regular and complete generalized fuzzy graphs are introduced and some of their properties are discussed. Finally, effective regular generalized fuzzy graphs are illustrated with examples.
Regular subalgebras of affine Kac-Moody algebras
NASA Astrophysics Data System (ADS)
Felikson, Anna; Retakh, Alexander; Tumarkin, Pavel
2008-09-01
We classify regular subalgebras of Kac-Moody algebras in terms of their root systems. In the process, we establish that a root system of a subalgebra is always an intersection of the root system of the algebra with a sublattice of its root lattice. We also discuss applications to investigations of regular subalgebras of hyperbolic Kac-Moody algebras and conformally invariant subalgebras of affine Kac-Moody algebras. In particular, we provide explicit formulae for determining all Virasoro charges in coset constructions that involve regular subalgebras.
Bayesian Methods for High Dimensional Linear Models
Mallick, Himel; Yi, Nengjun
2013-01-01
In this article, we present a selective overview of some recent developments in Bayesian model and variable selection methods for high dimensional linear models. While most of the reviews in the literature are based on conventional methods, we focus on recently developed methods, which have proven to be successful in dealing with high dimensional variable selection. First, we give a brief overview of the traditional model selection methods (viz. Mallows' Cp, AIC, BIC, DIC), followed by a discussion of some recently developed methods (viz. EBIC, regularization), which have occupied the minds of many statisticians. Then, we review high dimensional Bayesian methods with a particular emphasis on Bayesian regularization methods, which have been used extensively in recent years. We conclude by briefly addressing the asymptotic behaviors of Bayesian variable selection methods for high dimensional linear models under different regularity conditions. PMID:24511433
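As a small illustration of Bayesian regularization in a high dimensional linear model (p ≫ n), the sketch below computes the posterior mean under a Gaussian prior, which coincides with ridge regression. The data and the variances tau2 and sigma2 are invented for the example; the Bayesian variable selection methods the review covers (e.g. spike-and-slab priors, Bayesian lasso) go well beyond this:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 200                                   # more predictors than samples
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:5] = [2, -1, 1.5, -2, 1]
y = X @ beta_true + 0.1 * rng.normal(size=n)

def bayes_ridge_posterior_mean(X, y, tau2=1.0, sigma2=0.01):
    # beta ~ N(0, tau2 I), y | beta ~ N(X beta, sigma2 I)
    # posterior mean = (X^T X + (sigma2/tau2) I)^{-1} X^T y  (ridge shrinkage)
    return np.linalg.solve(X.T @ X + (sigma2 / tau2) * np.eye(X.shape[1]), X.T @ y)

beta_hat = bayes_ridge_posterior_mean(X, y)
ols_identifiable = np.linalg.matrix_rank(X) >= p    # False here: OLS breaks down
```

The prior makes the problem well posed even though ordinary least squares is not identifiable when p > n, which is the basic reason regularization pervades the high dimensional methods reviewed above.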
Analysis of regularized inversion of data corrupted by white Gaussian noise
NASA Astrophysics Data System (ADS)
Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli
2014-04-01
Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ‖Au − m‖²_{L²} + α‖u‖²_{H^r} } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and the first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed.
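The estimator above can be tried numerically on a discretized smoothing operator. The sketch below compares naive inversion with Tikhonov regularization for r = 0 (identity penalty) and r = 1 (a discrete first-derivative matrix standing in for the H¹ norm); the Gaussian-kernel forward map and all parameter values are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.linspace(0, 1, n)
# severely smoothing forward operator A (a toy stand-in for the forward map)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)
u_true = np.sin(2 * np.pi * t)
m = A @ u_true + 0.01 * rng.normal(size=n)         # m = A u + delta * noise

D = np.eye(n) - np.eye(n, k=1)                     # discrete derivative (r = 1)

def tikhonov(A, m, alpha, R):
    # argmin_u ||A u - m||^2 + alpha ||R u||^2, via the normal equations
    return np.linalg.solve(A.T @ A + alpha * R.T @ R, A.T @ m)

u0 = tikhonov(A, m, 1e-3, np.eye(n))               # r = 0: standard Tikhonov
u1 = tikhonov(A, m, 1e-3, D)                       # r = 1: derivative penalty
u_naive = np.linalg.lstsq(A, m, rcond=None)[0]     # unregularized: noise blows up
err0 = np.linalg.norm(u0 - u_true)
err1 = np.linalg.norm(u1 - u_true)
err_naive = np.linalg.norm(u_naive - u_true)
```

Ill-posedness shows up as catastrophic noise amplification in the unregularized inverse, while either Sobolev-type penalty keeps the reconstruction close to u_true.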
The L(1/2) regularization approach for survival analysis in the accelerated failure time model.
Chai, Hua; Liang, Yong; Liu, Xiao-Ying
2015-09-01
The analysis of high-dimensional, low-sample-size microarray data for survival analysis of cancer patients is an important problem. It is a huge challenge to select the significantly relevant biomarkers from microarray gene expression datasets, in which the number of genes is far greater than the number of samples. In this article, we develop a robust approach for predicting patient survival time with a L(1/2) regularization estimator in the accelerated failure time (AFT) model. The L(1/2) regularization can be seen as a typical representative of the L(q) (0 < q < 1) regularization methods and has shown many attractive features. To optimize the problem of relevant gene selection in high-dimensional biological data, we implemented the L(1/2) regularized AFT model by the coordinate descent algorithm with a renewed half-thresholding operator. The results of the simulation experiment showed that we can obtain a more accurate and sparse predictor for survival analysis with the L(1/2) regularized AFT model than with other L1-type regularization methods. The proposed procedures are applied to five real DNA microarray datasets to efficiently predict patient survival times based on a set of clinical prognostic factors and gene signatures.
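The half-thresholding operator mentioned above is the proximal map of the L(1/2) penalty. A scalar sketch of its closed form, via the trigonometric solution of the underlying cubic (after Xu et al.'s L1/2 thresholding theory, not the paper's exact implementation), with arbitrary test values of x and lam:

```python
import numpy as np

def half_threshold(x, lam):
    # Proximal map of lam * |y|^(1/2): argmin_y (y - x)^2 + lam * sqrt(|y|).
    # Closed form via the trig solution of 4u^3 - 4|x|u + lam = 0, with y = u^2.
    t = (54 ** (1 / 3) / 4) * lam ** (2 / 3)       # below this, y = 0 is optimal
    if abs(x) <= t:
        return 0.0
    phi = np.arccos((lam / 8) * (abs(x) / 3) ** (-1.5))
    y = (2 / 3) * abs(x) * (1 + np.cos((2 / 3) * (np.pi - phi)))
    return np.sign(x) * y

lam = 1.0
small = half_threshold(0.5, lam)   # below threshold: set exactly to zero
big = half_threshold(5.0, lam)     # above threshold: shrunk but nonzero
```

Inside a coordinate-descent loop this operator plays the role soft-thresholding plays for the L1 (lasso) penalty, but it zeroes coefficients more aggressively, which is why L(1/2) solutions tend to be sparser.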
NASA Astrophysics Data System (ADS)
Rosestolato, M.; Święch, A.
2017-02-01
We study value functions which are viscosity solutions of certain Kolmogorov equations. Using PDE techniques we prove that they are C^{1+α} regular on special finite dimensional subspaces. The problem has origins in hedging derivatives of risky assets in mathematical finance.
Remarks on the global regularity criteria for the 3D MHD equations via two components
NASA Astrophysics Data System (ADS)
Zhang, Zujin
2015-06-01
In this paper, we improve the recent results of Yamazaki on the regularity criteria for the three-dimensional magnetohydrodynamic equations (Yamazaki in J Math Phys 55:031505, 2014; J Math Fluid Mech. 2014. doi:10.1007/s00021-014-0178-1).
Regular Sleep Makes for Happier College Students
When erratic snoozers … studying and socializing, college students often have crazy sleep schedules, and new research suggests that a lack …
[Serum ferritin in donors with regular plateletpheresis].
Ma, Chun-Hui; Guo, Ru-Hua; Wu, Wei-Jian; Yan, Jun-Xiong; Yu, Jin-Lin; Zhu, Ye-Hua; He, Qi-Tong; Luo, Yi-Hong; Huang, Lu; Ye, Rui-Yun
2011-04-01
This study was aimed at evaluating the impact of regular platelet donation on the serum ferritin (SF) of donors. A total of 93 male blood donors, including 24 initial plateletpheresis donors and 69 regular plateletpheresis donors, were selected randomly. Their SF level was measured by ELISA. The results showed that the SF levels of initial and regular plateletpheresis donors were 91.08 ± 23.38 µg/L and 57.16 ± 35.48 µg/L respectively; both were within the normal range, but the difference between the two groups was significant (p < 0.05). The SF level decreased as donation frequency increased, although there were no significant differences among the groups with different donation frequencies. No correlation with lifetime platelet donations was found. It is concluded that regular plateletpheresis donors may have a lower SF level.
Epigenetic adaptation to regular exercise in humans.
Ling, Charlotte; Rönn, Tina
2014-07-01
Regular exercise has numerous health benefits, for example, it reduces the risk of cardiovascular disease and cancer. It has also been shown that the risk of type 2 diabetes can be halved in high-risk groups through nonpharmacological lifestyle interventions involving exercise and diet. Nevertheless, the number of people living a sedentary life is dramatically increasing worldwide. Researchers have searched for molecular mechanisms explaining the health benefits of regular exercise for decades and it is well established that exercise alters the gene expression pattern in multiple tissues. However, until recently it was unknown that regular exercise can modify the genome-wide DNA methylation pattern in humans. This review will focus on recent progress in the field of regular exercise and epigenetics.
The Volume of the Regular Octahedron
ERIC Educational Resources Information Center
Trigg, Charles W.
1974-01-01
Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
On a class of coedge regular graphs
NASA Astrophysics Data System (ADS)
Makhnev, A. A.; Paduchikh, D. V.
2005-12-01
We study graphs in which λ(a, b) ∈ {λ₁, λ₂} for every edge {a, b} and all μ-subgraphs are 2-cocliques. We give a description of connected edge-regular graphs with k ≥ (b₁² + 3b₁ − 4)/2. In particular, the following examples confirm that the inequality k > b₁(b₁ + 3)/2 is a sharp bound for strong regularity: the n-gon, the icosahedron graph, the graph in MP(6), and the distance-regular graph of diameter 4 with intersection array {x, x−1, 4, 1; 1, 2, x−1, x}, which is an antipodal 3-covering of the strongly regular graph with parameters ((x+2)(x+3)/6, x, 0, 6).
Probabilistic regularization in inverse optical imaging.
De Micheli, E; Viano, G A
2000-11-01
The problem of object restoration in the case of spatially incoherent illumination is considered. A regularized solution to the inverse problem is obtained through a probabilistic approach, and a numerical algorithm based on the statistical analysis of the noisy data is presented. Particular emphasis is placed on the question of the positivity constraint, which is incorporated into the probabilistically regularized solution by means of a quadratic programming technique. Numerical examples illustrating the main steps of the algorithm are also given.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
Usual Source of Care in Preventive Service Use: A Regular Doctor versus a Regular Site
Xu, K Tom
2002-01-01
Objective: To compare the effects of having a regular doctor and having a regular site on five preventive services, controlling for the endogeneity of having a usual source of care. Data Source: The Medical Expenditure Panel Survey 1996, conducted by the Agency for Healthcare Research and Quality and the National Center for Health Statistics. Study Design: Mammograms, pap smears, blood pressure checkups, cholesterol level checkups, and flu shots were examined. A modified behavioral model framework was presented, which controlled for the endogeneity of having a usual source of care. Based on this framework, a two-equation empirical model was established to predict the probabilities of having a regular doctor and having a regular site, and use of each type of preventive service. Principal Findings: Having a regular doctor was found to have a greater impact than having a regular site on discretionary preventive services, such as blood pressure and cholesterol level checkups. No statistically significant differences were found between the effects of having a regular doctor and having a regular site on the use of flu shots, pap smears, and mammograms. Among the five preventive services, having a usual source of care had the greatest impact on cholesterol level checkups and pap smears. Conclusions: Promoting a stable physician–patient relationship can improve patients' timely receipt of clinical prevention. For certain preventive services, having a regular doctor is more effective than having a regular site. PMID:12546284
Spectral analysis of two-dimensional Bose-Hubbard models
NASA Astrophysics Data System (ADS)
Fischer, David; Hoffmann, Darius; Wimberger, Sandro
2016-04-01
One-dimensional Bose-Hubbard models are well known to exhibit a transition from regular to quantum-chaotic spectral statistics. We extend this concept to relatively simple two-dimensional many-body models. In two dimensions, too, a transition from regular to chaotic spectral statistics is found and discussed. In particular, we analyze the dependence of the spectral properties on the bond number of the two-dimensional lattices and the applied boundary conditions. For maximal connectivity, the systems behave most regularly, in agreement with the applicability of mean-field approaches in the limit of many nearest-neighbor couplings at each site.
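The regular-to-chaotic transition is commonly quantified with the consecutive-gap ratio ⟨r⟩, which needs no spectral unfolding: ⟨r⟩ ≈ 0.386 for Poisson (regular) statistics and ⟨r⟩ ≈ 0.531 for GOE (chaotic) statistics. The sketch below checks these two reference values on surrogate spectra (a Poisson sequence and a random symmetric matrix), not on an actual Bose-Hubbard Hamiltonian:

```python
import numpy as np

def mean_gap_ratio(levels):
    # <r>, with r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) and s_n the spacings
    s = np.diff(np.sort(levels))
    s = s[s > 0]
    return np.mean(np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1]))

rng = np.random.default_rng(0)
poisson_levels = np.cumsum(rng.exponential(size=20000))    # uncorrelated spectrum
M = rng.normal(size=(1000, 1000))
goe_levels = np.linalg.eigvalsh((M + M.T) / 2)             # GOE-like spectrum
r_poisson = mean_gap_ratio(poisson_levels)                 # near 2 ln 2 - 1
r_goe = mean_gap_ratio(goe_levels)                         # near the GOE value
```

Applied to the eigenvalues of a many-body Hamiltonian, ⟨r⟩ drifting from the Poisson toward the GOE value is the spectral signature of the transition described above.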
On minimal energy dipole moment distributions in regular polygonal agglomerates
NASA Astrophysics Data System (ADS)
Rosa, Adriano Possebon; Cunha, Francisco Ricardo; Ceniceros, Hector Daniel
2017-01-01
Static, regular polygonal and close-packed clusters of spherical magnetic particles and their energy-minimizing magnetic moments are investigated in a two-dimensional setting. This study focuses on a simple particle system that is described solely by the dipole-dipole interaction energy, both without and in the presence of an in-plane magnetic field. For a regular polygonal structure of n sides with n ≥ 3, and in the absence of an external field, it is proved rigorously that the magnetic moments given by the roots of unity, i.e. tangential to the polygon, are a minimizer of the dipole-dipole interaction energy. Also, for zero external field, new multiple local minima are discovered for the regular polygonal agglomerates. The number of local extrema found is proportional to ⌊n/2⌋, and these critical points are characterized by the presence of a pair of magnetic moments with a large deviation from the tangential configuration and whose particles are at least three diameters apart. The changes induced by an in-plane external magnetic field on the minimal-energy, tangential configurations are investigated numerically. The two critical fields, which correspond to a crossover with the linear-chain minimal energy and to the break-up of the agglomerate, respectively, are examined in detail. In particular, the numerical results are compared directly with the asymptotic formulas of Danilov et al. (2012) [23], and a remarkable agreement is found even for moderate to large fields. Finally, three examples of close-packed structures are investigated: a triangle, a centered hexagon, and a 19-particle close-packed cluster. The numerical study reveals novel, illuminating characteristics of these compact clusters often seen in ferrofluids. The centered hexagon is energetically favorable to the regular hexagon, and the minimal energy for the larger 19-particle cluster is even lower than that of the close-packed hexagon. In addition, this larger close-packed agglomerate has two
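The zero-field result is easy to probe numerically: place unit dipoles at the vertices of a regular n-gon, compute the pairwise dipole-dipole energy, and compare the tangential (roots-of-unity) configuration against random in-plane moments. The energy normalization and the random sampling below are invented for the illustration:

```python
import numpy as np

def dipole_energy(pos, m):
    # E = sum_{i<j} [ m_i . m_j - 3 (m_i . r_ij)(m_j . r_ij) ] / |r_ij|^3
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            rh = d / r
            E += (m[i] @ m[j] - 3 * (m[i] @ rh) * (m[j] @ rh)) / r ** 3
    return E

n = 5
th = 2 * np.pi * np.arange(n) / n
pos = np.c_[np.cos(th), np.sin(th)]                 # regular pentagon vertices
tangential = np.c_[-np.sin(th), np.cos(th)]         # roots-of-unity moments
E_tan = dipole_energy(pos, tangential)

rng = np.random.default_rng(0)
E_rand = [dipole_energy(pos, np.c_[np.cos(a), np.sin(a)])
          for a in rng.uniform(0, 2 * np.pi, (200, n))]
```

Consistent with the theorem quoted above, no random in-plane configuration beats the tangential one in this sampling.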
Assessment of regularization techniques for electrocardiographic imaging
Milanič, Matija; Jazbinšek, Vojko; MacLeod, Robert S.; Brooks, Dana H.; Hren, Rok
2014-01-01
A widely used approach to solving the inverse problem in electrocardiography involves computing potentials on the epicardium from measured electrocardiograms (ECGs) on the torso surface. The main challenge of solving this electrocardiographic imaging (ECGI) problem lies in its intrinsic ill-posedness. While many regularization techniques have been developed to control wild oscillations of the solution, the choice of proper regularization methods for obtaining clinically acceptable solutions is still a subject of ongoing research, and there has been little rigorous comparison across methods proposed by different groups. This study systematically compared various regularization techniques for solving the ECGI problem under a unified simulation framework, consisting of both 1) progressively more complex idealized source models (from a single dipole to a triplet of dipoles), and 2) an electrolytic human torso tank containing a live canine heart, with the cardiac source being modeled by potentials measured on a cylindrical cage placed around the heart. We tested 13 different regularization techniques for solving the inverse problem of recovering epicardial potentials, and found that non-quadratic methods (total variation algorithms) and first-order and second-order Tikhonov regularizations outperformed the other methodologies and resulted in similar average reconstruction errors. PMID:24369741
Modified sparse regularization for electrical impedance tomography
Fan, Wenru; Xue, Qian; Wang, Huaxiang; Cui, Ziqiang; Sun, Benyuan; Wang, Qi
2016-03-15
Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on the L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time required to solve the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on a separable approximation algorithm is proposed, using an adaptive step size and a preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts.
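A separable-approximation sparse solver of the kind described above can be sketched as proximal-gradient (ISTA) iterations with a backtracking (adaptive) step size. This is a generic L1 sketch on a random linear problem, not the authors' EIT reconstruction; the matrix, sparsity pattern, and parameters are invented for the example, and the preconditioning step is omitted:

```python
import numpy as np

def ista_l1(A, y, lam=0.2, iters=800):
    # Minimize 0.5 ||A x - y||^2 + lam ||x||_1 by proximal gradient descent
    # with backtracking: double the local Lipschitz estimate Lc until the
    # quadratic upper bound holds (an adaptive step size of 1/Lc).
    x = np.zeros(A.shape[1])
    Lc = 1.0
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    f = lambda v: 0.5 * np.sum((A @ v - y) ** 2)
    for _ in range(iters):
        g = A.T @ (A @ x - y)
        while True:
            z = soft(x - g / Lc, lam / Lc)
            d = z - x
            if f(z) <= f(x) + g @ d + 0.5 * Lc * (d @ d):
                break
            Lc *= 2.0
        x = z
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))                   # underdetermined toy system
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]           # sparse "conductivity contrast"
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = ista_l1(A, y)
```

The soft-thresholding step is what preserves sharp, sparse features, and the backtracking loop is one simple way to realize the adaptive step size the abstract mentions.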
Regularity re-revisited: modality matters.
Tsapkini, Kyrana; Jarema, Gonia; Kehayia, Eva
2004-06-01
The issue of regular-irregular past tense formation was examined in a cross-modal lexical decision task in Modern Greek, a language where the orthographic and phonological overlap between present and past tense stems is the same for both regular and irregular verbs. The experiment described here is a follow-up study of previous visual lexical decision experiments (Tsapkini, Kehayia, & Jarema, 2002) that also addressed the regular-irregular distinction in Greek. In the present experiment, we investigated the effect of input modality in lexical processing and compared different types of regular and irregular verbs. In contrast to our previous intra-modal (visual-visual) priming experiments, in this cross-modal (auditory-visual) priming study, we found that regular verbs with an orthographically salient morphemic aspectual marker elicited the same facilitation as those without an orthographically salient marker. However, irregular verbs did not exhibit a different priming pattern with respect to modality. We interpret these results in the framework of a two-level lexical processing approach with modality-specific access representations at a surface level and modality-independent morphemic representations at a deeper level.
Shadow of rotating regular black holes
NASA Astrophysics Data System (ADS)
Abdujabbarov, Ahmadjon; Amir, Muhammed; Ahmedov, Bobomurat; Ghosh, Sushant G.
2016-05-01
We study the shadows cast by different types of rotating regular black holes, viz. Ayón-Beato-García (ABG), Hayward, and Bardeen. In addition to the total mass (M) and rotation parameter (a), these black holes carry further parameters: electric charge (Q), deviation parameter (g), and magnetic charge (g*). Interestingly, the size of the shadow is affected by these parameters in addition to the rotation parameter. We find that the radius of the shadow in each case decreases monotonically, and the distortion parameter increases, as the values of these parameters increase. A comparison with the standard Kerr case is also presented. We have also studied the influence of a plasma environment around regular black holes on their shadows. The presence of plasma increases the apparent size of a regular black hole's shadow through two effects: (i) gravitational redshift of the photons and (ii) the radial dependence of the plasma density.
Strong regularizing effect of integrable systems
Zhou, Xin
1997-11-01
Many time evolution problems exhibit the so-called strong regularizing effect: for any irregular initial data, as soon as t becomes greater than 0, the solution becomes C^∞ in both the spatial and temporal variables. This paper studies 1 × 1 dimensional integrable systems for such a regularizing effect. In the work of Sachs and Kappeler [S], [K] (see also the earlier works [KFJ] and [Ka]), the strong regularizing effect is proved for KdV with rapidly decaying irregular initial data, using the inverse scattering method. There are two equivalent Gel'fand-Levitan-Marchenko (GLM) equations associated to an inverse scattering problem, one normalized at x = +∞ and the other at x = −∞. The method of [S], [K] relies on the fact that the KdV waves propagate only in one direction and therefore one of the two GLM equations remains normalized and can be differentiated infinitely many times. 15 refs.
Nonlinear electrodynamics and regular black holes
NASA Astrophysics Data System (ADS)
Sajadi, S. N.; Riazi, N.
2017-03-01
In this work, an exact regular black hole solution in General Relativity is presented. The source is a nonlinear electromagnetic field with the algebraic structure T00=T11 for the energy-momentum tensor, partially satisfying the weak energy condition but not the strong energy condition. In the weak field limit, the EM field behaves like the Maxwell field. The solution corresponds to a charged black hole with q ≤ 0.77 m. The metric, the curvature invariants, and the electric field are regular everywhere. The black hole is stable against small perturbations of spacetime, and its geometrothermodynamical stability has been investigated using the Weinhold metric. Finally, we investigate the idea that the observable universe lives inside a regular black hole and argue that this picture might provide a viable description of the universe.
Analytic regularization in Soft-Collinear Effective Theory
NASA Astrophysics Data System (ADS)
Becher, Thomas; Bell, Guido
2012-06-01
In high-energy processes which are sensitive to small transverse momenta, individual contributions from collinear and soft momentum regions are not separately well-defined in dimensional regularization. A simple possibility to solve this problem is to introduce additional analytic regulators. We point out that in massless theories the unregularized singularities only appear in real-emission diagrams and that the additional regulators can be introduced in such a way that gauge invariance and the factorized eikonal structure of soft and collinear emissions are maintained. This simplifies factorization proofs and implies, at least in the massless case, that the structure of Soft-Collinear Effective Theory remains completely unchanged by the presence of the additional regulators. Our formalism also provides a simple operator definition of transverse parton distribution functions.
Baseline Regularization for Computational Drug Repositioning with Longitudinal Observational Data
Kuang, Zhaobin; Thomson, James; Caldwell, Michael; Peissig, Peggy; Stewart, Ron; Page, David
2016-01-01
Computational Drug Repositioning (CDR) is the knowledge discovery process of finding new indications for existing drugs leveraging heterogeneous drug-related data. Longitudinal observational data such as Electronic Health Records (EHRs) have become an emerging data source for CDR. To address the high-dimensional, irregular, subject and time-heterogeneous nature of EHRs, we propose Baseline Regularization (BR) and a variant that extend the one-way fixed effect model, which is a standard approach to analyze small-scale longitudinal data. For evaluation, we use the proposed methods to search for drugs that can lower Fasting Blood Glucose (FBG) level in the Marshfield Clinic EHR. Experimental results suggest that the proposed methods are capable of rediscovering drugs that can lower FBG level as well as identifying some potential blood sugar lowering drugs in the literature.
Regularity for steady periodic capillary water waves with vorticity.
Henry, David
2012-04-13
In the following, we prove new regularity results for two-dimensional steady periodic capillary water waves with vorticity, in the absence of stagnation points. Firstly, we prove that if the vorticity function has a Hölder-continuous first derivative, then the free surface is a smooth curve and the streamlines beneath the surface will be real analytic. Furthermore, once we assume that the vorticity function is real analytic, it will follow that the wave surface profile is itself also analytic. A particular case of this result includes irrotational fluid flow where the vorticity is zero. The property of the streamlines being analytic allows us to gain physical insight into small-amplitude waves by justifying a power-series approach.
Uncorrelated regularized local Fisher discriminant analysis for face recognition
NASA Astrophysics Data System (ADS)
Wang, Zhan; Ruan, Qiuqi; An, Gaoyun
2014-07-01
A local Fisher discriminant analysis can work well for a multimodal problem. However, it often suffers from the undersampled problem, which makes the local within-class scatter matrix singular. We develop a supervised discriminant analysis technique called uncorrelated regularized local Fisher discriminant analysis for image feature extraction. In this technique, the local within-class scatter matrix is approximated by a full-rank matrix that not only solves the undersampled problem but also eliminates the poor impact of small and zero eigenvalues. Statistically uncorrelated features are obtained to remove redundancy. A trace ratio criterion and the corresponding iterative algorithm are employed to globally solve the objective function. Experimental results on four famous face databases indicate that our proposed method is effective and outperforms the conventional dimensionality reduction methods.
Analytic regularization of uniform cubic B-spline deformation fields.
Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C
2012-01-01
Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central-differencing solution.
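The key observation, that bending energy is a quadratic form in the spline coefficients, can be sketched in a 1-D analog. In the paper the matrix is derived analytically for 3-D cubic B-splines; here, as an illustration only, the matrix Q is precomputed numerically once and then reused for any coefficient vector:

```python
def bpp(t):
    # Second derivative of the uniform cubic B-spline basis function.
    a = abs(t)
    if a < 1.0:
        return 3.0 * a - 2.0
    if a < 2.0:
        return 2.0 - a
    return 0.0

def integrate(f, lo, hi, n=4000):
    # Simple midpoint quadrature.
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# Precompute Q[k][l] = integral of B''(t-k) * B''(t-l) once; afterwards the
# bending energy of ANY coefficient vector c is the quadratic form c^T Q c.
K = 5  # number of spline coefficients in this 1-D toy
Q = [[integrate(lambda t, k=k, l=l: bpp(t - k) * bpp(t - l), -2.0, K + 2.0)
      for l in range(K)] for k in range(K)]

c = [0.0, 1.0, -0.5, 2.0, 0.3]   # arbitrary deformation coefficients
E_quad = sum(c[k] * c[l] * Q[k][l] for k in range(K) for l in range(K))

# Direct numerical integration of (f'')^2 for comparison:
fpp = lambda t: sum(c[k] * bpp(t - k) for k in range(K))
E_direct = integrate(lambda t: fpp(t) ** 2, -2.0, K + 2.0)
print(abs(E_quad - E_direct) < 1e-6)  # True: the quadratic form is exact
```

The speed-up in the paper comes from exactly this structure: Q depends only on the spline grid, so it is computed once, while per-iteration energy (and gradient) evaluation reduces to matrix products over the coefficients.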
Demosaicing as the problem of regularization
NASA Astrophysics Data System (ADS)
Kunina, Irina; Volkov, Aleksey; Gladilin, Sergey; Nikolaev, Dmitry
2015-12-01
Demosaicing is the process of reconstruction of a full-color image from the Bayer mosaic, which is used in digital cameras for image formation. This problem is usually considered as an interpolation problem. In this paper, we propose to consider the demosaicing problem as a problem of solving an underdetermined system of algebraic equations using regularization methods. We consider regularization with standard l1/2-, l1-, and l2-norms and their effect on the quality of image reconstruction. The experimental results showed that the proposed technique can both be used within existing methods and serve as the basis for new ones.
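The underdetermined-system view can be illustrated on the smallest possible example (one equation, two unknowns, values invented; real demosaicing has one measured equation per pixel over three color channels). The l2-regularized solution x = Aᵀ(AAᵀ + λI)⁻¹b selects the minimum-norm answer as λ → 0:

```python
# l2-regularized solution of a single underdetermined equation a . x = b.
# For a one-row A, the matrix A A^T is a scalar, so the formula
# x = A^T (A A^T + lam)^{-1} b reduces to elementwise arithmetic.
def min_norm_l2(a, b, lam):
    s = sum(v * v for v in a) + lam
    return [v * b / s for v in a]

# x1 + x2 = 2 has infinitely many solutions; regularization picks one.
print(min_norm_l2([1.0, 1.0], 2.0, lam=0.0))  # [1.0, 1.0]: minimum-norm
print(min_norm_l2([1.0, 1.0], 2.0, lam=2.0))  # [0.5, 0.5]: stronger damping
```

l1 and l1/2 penalties, by contrast, favor sparse solutions and require iterative solvers rather than a closed form.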
Existence of constants in regular splicing languages.
Bonizzoni, Paola; Jonoska, Nataša
2015-06-01
In spite of wide investigations of finite splicing systems in formal language theory, basic questions, such as their characterization, remain unsolved. It has been conjectured that a necessary condition for a regular language L to be a splicing language is that L must have a constant in the Schützenberger sense. We prove this longstanding conjecture to be true. The result is based on properties of strongly connected components of the minimal deterministic finite state automaton for a regular splicing language. Using constants of the corresponding languages, we also provide properties of transitive automata and path automata.
Generalised hyperbolicity in spacetimes with Lipschitz regularity
NASA Astrophysics Data System (ADS)
Sanchez Sanchez, Yafet; Vickers, James A.
2017-02-01
In this paper we obtain general conditions under which the wave equation is well-posed in spacetimes with metrics of Lipschitz regularity. In particular, the results can be applied to spacetimes where there is a loss of regularity on a hypersurface, such as shell-crossing singularities, thin shells of matter, and surface layers. This provides a framework for regarding gravitational singularities not as obstructions to the world lines of point-particles, but rather as obstructions to the dynamics of test fields.
Regularization and the potential of effective field theory in nucleon-nucleon scattering
Phillips, D.R.
1998-04-01
This paper examines the role that regularization plays in the definition of the potential used in effective field theory (EFT) treatments of the nucleon-nucleon interaction. The author considers NN scattering in S-wave channels at momenta well below the pion mass. In these channels, (quasi-)bound states are present at energies well below the scale m_π^2/M expected from naturalness arguments. He asks whether, in the presence of such a shallow bound state, there is a regularization scheme which leads to an EFT potential that is both useful and systematic. In general, if a low-lying bound state is present, then cutoff regularization leads to an EFT potential which is useful but not systematic, and dimensional regularization with minimal subtraction leads to one which is systematic but not useful. The recently proposed technique of dimensional regularization with power-law divergence subtraction allows the definition of an EFT potential which is both useful and systematic.
Regular homotopy for immersions of graphs into surfaces
NASA Astrophysics Data System (ADS)
Permyakov, D. A.
2016-06-01
We study invariants of regular immersions of graphs into surfaces up to regular homotopy. The concept of the winding number is used to introduce a new simple combinatorial invariant of regular homotopy. Bibliography: 20 titles.
Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.
Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam
2017-09-01
Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, negative impact on manifold structure of the sparse constraint, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships, resulting in more effective manifold information. Furthermore, NMFRC adopts rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.
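For orientation, the classical NMF baseline that NMFRC extends can be sketched with the Lee-Seung multiplicative updates (the paper's additions, rank regularization, hard label constraints, and geodesic-distance graphs, are not reproduced here; the matrix V is invented):

```python
# Classical NMF via Lee-Seung multiplicative updates: V ~ W H, all >= 0.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def nmf(V, r, iters=500, eps=1e-9):
    import random
    random.seed(0)
    n, m = len(V), len(V[0])
    # Positive random initialization keeps every entry nonnegative forever,
    # because the updates below only multiply by nonnegative ratios.
    W = [[random.random() + 0.1 for _ in range(r)] for _ in range(n)]
    H = [[random.random() + 0.1 for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(r)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
             for i in range(n)]
    return W, H

V = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],     # exactly rank 2, so a near-exact factorization exists
     [1.0, 1.0, 1.0]]
W, H = nmf(V, r=2)
WH = matmul(W, H)
err = max(abs(WH[i][j] - V[i][j]) for i in range(3) for j in range(3))
```

Each update monotonically decreases the Frobenius reconstruction error; semisupervised variants such as NMFRC modify the objective and constraint set rather than this basic multiplicative structure.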
Regularizing the divergent structure of light-front currents
Bakker, Bernard L. G.; Choi, Ho-Meoyng; Ji, Chueng-Ryong
2001-04-01
The divergences appearing in the (3+1)-dimensional fermion-loop calculations are often regulated by smearing the vertices in a covariant manner. Performing a parallel light-front calculation, we corroborate the similarity between the vertex-smearing technique and the Pauli-Villars regularization. In the light-front calculation of the electromagnetic meson current, we find that the persistent end-point singularity that appears in the case of point vertices is removed even if the smeared vertex is taken to the limit of the point vertex. Recapitulating the current conservation, we substantiate the finiteness of both valence and nonvalence contributions in all components of the current with the regularized bound-state vertex. However, we stress that each contribution, valence or nonvalence, depends on the reference frame even though the sum is always frame independent. The numerical taxonomy of each contribution including the instantaneous contribution and the zero-mode contribution is presented in the π-, K-, and D-meson form factors.
COLLIER: A fortran-based complex one-loop library in extended regularizations
NASA Astrophysics Data System (ADS)
Denner, Ansgar; Dittmaier, Stefan; Hofer, Lars
2017-03-01
We present the library COLLIER for the numerical evaluation of one-loop scalar and tensor integrals in perturbative relativistic quantum field theories. The code provides numerical results for arbitrary tensor and scalar integrals for scattering processes in general quantum field theories. For tensor integrals either the coefficients in a covariant decomposition or the tensor components themselves are provided. COLLIER supports complex masses, which are needed in calculations involving unstable particles. Ultraviolet and infrared singularities are treated in dimensional regularization. For soft and collinear singularities mass regularization is available as an alternative.
A Quantitative Measure of Memory Reference Regularity
Mohan, T; de Supinski, B R; McKee, S A; Mueller, F; Yoo, A
2001-10-01
The memory performance of applications on existing architectures depends significantly on hardware features like prefetching and caching that exploit the locality of the memory accesses. The principle of locality has guided the design of many key micro-architectural features, including cache hierarchies, TLBs, and branch predictors. Quantitative measures of spatial and temporal locality have been useful for predicting the performance of memory hierarchy components. Unfortunately, the concept of locality is constrained to capturing memory access patterns characterized by proximity, while sophisticated memory systems are capable of exploiting other predictable access patterns. Here, we define the concepts of spatial and temporal regularity, and introduce a measure of spatial access regularity to quantify some of this predictability in access patterns. We present an efficient online algorithm to dynamically determine the spatial access regularity in an application's memory references, and demonstrate its use on a set of regular and irregular codes. We find that the use of our algorithm, with its associated overhead of trace generation, slows typical applications by a factor of 50-200, which is at least an order of magnitude better than traditional full trace generation approaches. Our approach can be applied to the characterization of program access patterns and in the implementation of sophisticated, software-assisted prefetching mechanisms, and its inherently parallel nature makes it well suited for use with multi-threaded programs.
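The idea of quantifying stride predictability can be illustrated with a crude offline proxy (the paper's online algorithm and its exact metric are not reproduced; the traces below are invented):

```python
import random

def stride_regularity(addrs):
    # Fraction of references whose stride (difference from the previous
    # address) repeats the immediately preceding stride. A constant-stride
    # sweep scores 1.0; scattered references score near 0.0.
    if len(addrs) < 3:
        return 0.0
    hits = sum(1 for i in range(2, len(addrs))
               if addrs[i] - addrs[i-1] == addrs[i-1] - addrs[i-2])
    return hits / (len(addrs) - 2)

random.seed(1)
regular = [8 * i for i in range(100)]                        # array sweep
irregular = [random.randrange(1 << 20) for _ in range(100)]  # scattered refs
print(stride_regularity(regular))    # 1.0
print(stride_regularity(irregular))  # ~0.0
```

A real tool would compute such a statistic online over a hardware- or instrumentation-generated trace, which is where the 50-200x slowdown quoted above comes from.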
Strategies of Teachers in the Regular Classroom
ERIC Educational Resources Information Center
De Leeuw, Renske Ria; De Boer, Anke Aaltje
2016-01-01
It is known that regular schoolteachers have difficulties in educating students with social, emotional and behavioral difficulties (SEBD), mainly because of their disruptive behavior. In order to manage the disruptive behavior of students with SEBD, much advice and many strategies are provided in the educational literature. However, very little is known…
Regular Gleason Measures and Generalized Effect Algebras
NASA Astrophysics Data System (ADS)
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the framework of generalized effect algebras.
Regular Polygons with Rational Area or Perimeter.
ERIC Educational Resources Information Center
Killgrove, R. B.; Koster, D. W.
1991-01-01
Discussed are two approaches to determining which regular polygons, either inscribed within or circumscribed about the unit circle, exhibit rational area or rational perimeter. One approach involves applications of abstract theory from a typical modern algebra course, whereas the other approach employs material from a traditional…
Regularization of turbulence - a comprehensive modeling approach
NASA Astrophysics Data System (ADS)
Geurts, B. J.
2011-12-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulent flow will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent way. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred will be given on the basis of homogeneous isotropic turbulence.
Starting flow in regular polygonal ducts
NASA Astrophysics Data System (ADS)
Wang, C. Y.
2016-06-01
The starting flows in regular polygonal ducts of S = 3, 4, 5, 6, 8 sides are determined by the method of eigenfunction superposition. The necessary S-fold symmetric eigenfunctions and eigenvalues of the Helmholtz equation are found either exactly or by boundary point match. The results show the starting time is governed by the first eigenvalue.
Regularity Aspects in Inverse Musculoskeletal Biomechanics
NASA Astrophysics Data System (ADS)
Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten
2008-09-01
Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which would indicate that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
On the regularity in some variational problems
NASA Astrophysics Data System (ADS)
Ragusa, Maria Alessandra; Tachikawa, Atsushi
2017-01-01
Our main goal is to study some regularity results in which estimates in Morrey spaces are obtained for the derivatives of local minimizers of variational integrals of the form 𝒜(u, Ω) = ∫_Ω F(x, u, Du) dx, where Ω is a bounded domain in ℝ^m and the integrand F can take several different forms.
Effective Special Education in Regular Classes.
ERIC Educational Resources Information Center
Wang, Margaret C.; Birch, Jack W.
1984-01-01
A study of 156 K-3 classrooms revealed that the Adaptive Learning Environments Model, an educational approach that accommodates, in regular classes, a wider-than-usual range of individual differences, can be implemented effectively in a variety of settings, and that favorable student outcome measures coincide with high degrees of program…
Fast Image Reconstruction with L2-Regularization
Bilgic, Berkin; Chatnuntawech, Itthi; Fan, Audrey P.; Setsompop, Kawin; Cauley, Stephen F.; Wald, Lawrence L.; Adalsteinsson, Elfar
2014-01-01
Purpose: We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speed-up relative to state-of-the-art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction. Materials and Methods: We compare fast L2-based methods to state-of-the-art algorithms employing iterative L1- and L2-regularization in numerical phantom and in vivo data in three applications: 1) fast Quantitative Susceptibility Mapping (QSM), 2) lipid artifact suppression in Magnetic Resonance Spectroscopic Imaging (MRSI), and 3) Diffusion Spectrum Imaging (DSI). In all cases, the proposed L2-based methods are compared with the state-of-the-art algorithms, and a two-to-three-order-of-magnitude speed-up is demonstrated with similar reconstruction quality. Results: The closed-form solution developed for regularized QSM allows processing of a 3D volume in under 5 seconds, the proposed lipid suppression algorithm takes under 1 second to reconstruct single-slice MRSI data, and the PCA-based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice in under 30 seconds, all running in Matlab on a standard workstation. Conclusion: For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or to L1-based iterative algorithms, without compromising image quality. PMID:24395184
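The kind of structure that makes L2 closed forms fast can be sketched on a 1-D toy: for a circulant (periodic convolution) forward model, the Tikhonov normal equations diagonalize in the Fourier domain, so the regularized solution is a single pointwise division rather than an iteration. This is only an analog of the paper's approach (its QSM/MRSI/DSI operators differ); the kernel, signal, and λ below are invented:

```python
import cmath

def dft(x, inverse=False):
    # Naive O(n^2) DFT; sufficient for a length-8 demonstration.
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

# Closed-form L2 (Tikhonov) deblurring of a circular convolution:
# X_hat[j] = conj(H[j]) * Y[j] / (|H[j]|^2 + lam), one pass, no iterations.
n, lam = 8, 1e-3
h = [0.6, 0.2, 0, 0, 0, 0, 0, 0.2]     # symmetric periodic blur kernel
x = [0, 0, 1.0, 0, 0, 0, 0, 0]         # true signal (an impulse)
y = [sum(h[k] * x[(i - k) % n] for k in range(n)) for i in range(n)]  # blurred
H, Y = dft(h), dft(y)
X_hat = [Hj.conjugate() * Yj / (abs(Hj) ** 2 + lam) for Hj, Yj in zip(H, Y)]
x_hat = [v.real for v in dft(X_hat, inverse=True)]
err = max(abs(a - b) for a, b in zip(x_hat, x))
print(err)  # small (~0.007): near-exact recovery in closed form
```

Iterative L1 solvers can produce sharper solutions for sparse signals, but each iteration costs roughly as much as this entire closed-form solve, which is the trade-off the paper quantifies.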
Semantic Gender Assignment Regularities in German
ERIC Educational Resources Information Center
Schwichtenberg, Beate; Schiller, Niels O.
2004-01-01
Gender assignment relates to a native speaker's knowledge of the structure of the gender system of his/her language, allowing the speaker to select the appropriate gender for each noun. Whereas categorical assignment rules and exceptional gender assignment are well investigated, assignment regularities, i.e., tendencies in the gender distribution…
Regularizing cosmological singularities by varying physical constants
Dąbrowski, Mariusz P.; Marosek, Konrad
2013-02-01
Varying physical constant cosmologies were claimed to solve standard cosmological problems such as the horizon, the flatness and the Λ-problem. In this paper, we suggest yet another possible application of these theories: solving the singularity problem. By specifying some examples we show that various cosmological singularities may be regularized provided the physical constants evolve in time in an appropriate way.
Dyslexia in Regular Orthographies: Manifestation and Causation
ERIC Educational Resources Information Center
Wimmer, Heinz; Schurz, Matthias
2010-01-01
This article summarizes our research on the manifestation of dyslexia in German and on cognitive deficits, which may account for the severe reading speed deficit and the poor orthographic spelling performance that characterize dyslexia in regular orthographies. An only limited causal role of phonological deficits (phonological awareness,…
Generalisation of Regular and Irregular Morphological Patterns.
ERIC Educational Resources Information Center
Prasada, Sandeep; Pinker, Steven
1993-01-01
When it comes to explaining English verbs' patterns of regular and irregular generalization, single-network theories have difficulty with the former, and rule-only theories with the latter. Linguistic and psycholinguistic evidence, based on observation during experiments and simulations in morphological pattern generation, independently call…
Regular Nonchaotic Attractors with Positive Plural
NASA Astrophysics Data System (ADS)
Zhang, Xu
2016-12-01
The study of strange nonchaotic attractors is an interesting topic: the dynamics are neither regular nor chaotic (chaotic here meaning positive Lyapunov exponents), and the shape of the attractors has a complicated geometric, or fractal, structure. It is found that in a class of planar first-order nonautonomous systems, attractors can exist whose shape is regular, on which the orbits are transitive, and whose dynamics are not chaotic. We call this type of attractor a regular nonchaotic attractor with positive plural; these are different from strange nonchaotic attractors, attracting fixed points, and attracting periodic orbits. Several examples with computer simulations are given. The first two examples have annulus-shaped attractors, and another two have disk-shaped attractors. The last two examples, with externally driven terms at two incommensurate frequencies, have regular nonchaotic attractors with positive plural, implying that the existence of externally driven terms at two incommensurate frequencies might not be a sufficient condition for a system to have strange nonchaotic attractors.
Exploring the structural regularities in networks.
Shen, Hua-Wei; Cheng, Xue-Qi; Guo, Jia-Feng
2011-11-01
In this paper, we consider the problem of exploring structural regularities of networks by dividing the nodes of a network into groups such that the members of each group have similar patterns of connections to other groups. Specifically, we propose a general statistical model to describe network structure. In this model, a group is viewed as a hidden or unobserved quantity and it is learned by fitting the observed network data using the expectation-maximization algorithm. Compared with existing models, the most prominent strength of our model is the high flexibility. This strength enables it to possess the advantages of existing models and to overcome their shortcomings in a unified way. As a result, not only can broad types of structure be detected without prior knowledge of the type of intrinsic regularities existing in the target network, but also the type of identified structure can be directly learned from the network. Moreover, by differentiating outgoing edges from incoming edges, our model can detect several types of structural regularities beyond competing models. Tests on a number of real world and artificial networks demonstrate that our model outperforms the state-of-the-art model in shedding light on the structural regularities of networks, including the overlapping community structure, multipartite structure, and several other types of structure, which are beyond the capability of existing models.
Prox-regular functions in Hilbert spaces
NASA Astrophysics Data System (ADS)
Bernard, Frédéric; Thibault, Lionel
2005-03-01
This paper studies the prox-regularity concept for functions in the general context of Hilbert space. In particular, a subdifferential characterization is established as well as several other properties. It is also shown that the Moreau envelopes of such functions are continuously differentiable.
Regularities in Spearman's Law of Diminishing Returns.
ERIC Educational Resources Information Center
Jensen, Arthur R.
2003-01-01
Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…
TAUBERIAN THEOREMS FOR MATRIX REGULAR VARIATION
MEERSCHAERT, M. M.; SCHEFFLER, H.-P.
2013-01-01
Karamata’s Tauberian theorem relates the asymptotics of a nondecreasing right-continuous function to that of its Laplace-Stieltjes transform, using regular variation. This paper establishes the analogous Tauberian theorem for matrix-valued functions. Some applications to time series analysis are indicated.
Strategies of Teachers in the Regular Classroom
ERIC Educational Resources Information Center
De Leeuw, Renske Ria; De Boer, Anke Aaltje
2016-01-01
It is known that regular schoolteachers have difficulties in educating students with social, emotional and behavioral difficulties (SEBD), mainly because of their disruptive behavior. To help manage the disruptive behavior of students with SEBD, much advice and many strategies are provided in the educational literature. However, very little is known…
Regularity of rotational travelling water waves.
Escher, Joachim
2012-04-13
Several recent results on the regularity of streamlines beneath a rotational travelling wave, along with the wave profile itself, will be discussed. The survey includes the classical water wave problem in both finite and infinite depth, capillary waves and solitary waves as well. A common assumption in all models to be discussed is the absence of stagnation points.
Sparse High Dimensional Models in Economics
Fan, Jianqing; Lv, Jinchi; Qi, Lei
2010-01-01
This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed.
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1992-01-01
Modified vernier scale gives accurate two-dimensional coordinates from maps, drawings, or cathode-ray-tube displays. Movable circular overlay rests on fixed rectangular-grid overlay. Pitch of circles nine-tenths that of grid and, for greatest accuracy, radii of circles large compared with pitch of grid. Scale enables user to interpolate between finest divisions of regularly spaced rule simply by observing which mark on auxiliary vernier rule aligns with mark on primary rule.
Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression.
Zhen, Xiantong; Yu, Mengyang; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo
2016-06-08
Multioutput regression has recently shown great ability to solve challenging problems in both computer vision and medical image analysis. However, due to the huge image variability and ambiguity, it is fundamentally challenging to handle the highly complex input-target relationship of multioutput regression, especially with indiscriminate high-dimensional representations. In this paper, we propose a novel supervised descriptor learning (SDL) algorithm for multioutput regression, which can establish discriminative and compact feature representations to improve the multivariate estimation performance. The SDL is formulated as generalized low-rank approximations of matrices with a supervised manifold regularization. The SDL is able to simultaneously extract discriminative features closely related to multivariate targets and remove irrelevant and redundant information by transforming raw features into a new low-dimensional space aligned to targets. The resulting discriminative yet compact descriptor largely reduces the variability and ambiguity for multioutput regression, which enables more accurate and efficient multivariate estimation. We conduct extensive evaluation of the proposed SDL on both synthetic data and real-world multioutput regression tasks for both computer vision and medical image analysis. Experimental results have shown that the proposed SDL can achieve high multivariate estimation accuracy on all tasks and largely outperforms state-of-the-art algorithms. Our method establishes a novel SDL framework for multioutput regression, which can be widely used to boost the performance in different applications.
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
42 CFR 61.3 - Purpose of regular fellowships.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...
42 CFR 61.3 - Purpose of regular fellowships.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...
42 CFR 61.3 - Purpose of regular fellowships.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...
42 CFR 61.3 - Purpose of regular fellowships.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...
Tracking magnetogram proper motions by multiscale regularization
NASA Technical Reports Server (NTRS)
Jones, Harrison P.
1995-01-01
Long uninterrupted sequences of solar magnetograms from the Global Oscillations Network Group (GONG) network and from the Solar and Heliospheric Observatory (SOHO) satellite will provide the opportunity to study the proper motions of magnetic features. We explore the possible use of multiscale regularization, a scale-recursive estimation technique which begins with a prior model of how state variables and their statistical properties propagate over scale. Short magnetogram sequences are analyzed with the multiscale regularization algorithm as applied to optical flow. The algorithm is found to be efficient, to provide results for all the spatial scales spanned by the data, and to provide error estimates for the solutions. It is also found to be less sensitive to evolutionary changes than correlation tracking.
Variational regularized 2-D nonnegative matrix factorization.
Gao, Bin; Woo, W L; Dlay, S S
2012-05-01
A novel approach for adaptive regularization of 2-D nonnegative matrix factorization is presented. The proposed matrix factorization is developed under the framework of maximum a posteriori probability and is adaptively fine-tuned using the variational approach. The method enables: (1) a generalized criterion for variable sparseness to be imposed onto the solution; and (2) prior information to be explicitly incorporated into the basis features. The method is computationally efficient and has been demonstrated on two applications, namely extracting features from images and separating single-channel source mixtures. In addition, it is shown that the basis features of an information-bearing matrix can be extracted more efficiently using the proposed regularized priors. Experimental tests have been rigorously conducted to verify the efficacy of the proposed method.
Regularity of nuclear structure under random interactions
Zhao, Y. M.
2011-05-06
In this contribution I present a brief introduction to simplicity out of complexity in nuclear structure, specifically, the regularity of nuclear structure under random interactions. I exemplify such simplicity by two examples: spin-zero ground state dominance and positive parity ground state dominance in even-even nuclei. Then I discuss two recent results of nuclear structure in the presence of random interactions, in collaboration with Prof. Arima. Firstly I discuss sd bosons under random interactions, with the focus on excited states in the yrast band. We find a few regular patterns in these excited levels. Secondly I discuss our recent efforts towards obtaining eigenvalues without diagonalizing the full matrices of the nuclear shell model Hamiltonian.
Effort variation regularization in sound field reproduction.
Stefanakis, Nick; Jacobsen, Finn; Sarris, John
2010-08-01
In this paper, active control is used to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions while the complex strengths of the reproduction sources maintain a certain mutual relation. Specifically, it is suggested that the phase differential of the source driving signals should agree with the phase differential of the desired sound pressure field. The performance of the suggested method is compared with that of conventional effort regularization, wave field synthesis (WFS), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, thus improving the reproduction accuracy in the listening room.
Modeling Regular Replacement for String Constraint Solving
NASA Technical Reports Server (NTRS)
Fu, Xiang; Li, Chung-Chih
2010-01-01
Bugs in user input sanitization of software systems often lead to vulnerabilities. Many of them are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FSTs. A compact representation of FSTs is implemented in SUSHI, a string constraint solver, and applied to detecting vulnerabilities in web applications.
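Python's `re` module gives a quick feel for the greedy versus reluctant replacement semantics that the paper models with finite state transducers (this only illustrates the two semantics, not the FST construction itself):

```python
import re

text = "<a><b>"

# Greedy: ".*" consumes as much as possible, so one match spans both tags.
greedy = re.sub(r"<.*>", "X", text)      # -> "X"

# Reluctant (lazy): ".*?" consumes as little as possible, matching each tag.
reluctant = re.sub(r"<.*?>", "X", text)  # -> "XX"
```

The two semantics produce different output strings from the same pattern skeleton, which is exactly why a string constraint solver must model the replacement semantics precisely.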
Symmetries and regular behavior of Hamiltonian systems.
Kozlov, Valeriy V.
1996-03-01
The behavior of the phase trajectories of the Hamilton equations is commonly classified as regular and chaotic. Regularity is usually related to the condition for complete integrability, i.e., a Hamiltonian system with n degrees of freedom has n independent integrals in involution. If at the same time the simultaneous integral manifolds are compact, the solutions of the Hamilton equations are quasiperiodic. In particular, the entropy of the Hamiltonian phase flow of a completely integrable system is zero. It is found that there is a broader class of Hamiltonian systems that do not show signs of chaotic behavior. These are systems that allow n commuting "Lagrangian" vector fields, i.e., the symplectic 2-form on each pair of such fields is zero. They include, in particular, Hamiltonian systems with multivalued integrals. (c) 1996 American Institute of Physics.
Power-law regularities in human language
NASA Astrophysics Data System (ADS)
Mehri, Ali; Lashkari, Sahar Mohammadpour
2016-11-01
Complex structure of human language enables us to exchange very complicated information. This communication system obeys some common nonlinear statistical regularities. We investigate four important long-range features of human language, performing our calculations on adopted works of seven famous litterateurs. Zipf's law and Heaps' law, which imply well-known power-law behaviors, are established in human language and show a qualitative inverse relation with each other. Furthermore, the informational content associated with word ordering is measured using an entropic metric. We also calculate the fractal dimension of words in the text using the box counting method. The fractal dimension of each word, a positive value less than or equal to one, reflects its spatial distribution in the text. Generally, we can claim that human language follows the mentioned power-law regularities. These power-law relations imply the existence of long-range correlations between the word types used to convey a given idea.
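For readers unfamiliar with the two laws, a minimal sketch of how the rank-frequency (Zipf) and vocabulary-growth (Heaps) statistics are computed from a word sequence; the toy text is illustrative only:

```python
from collections import Counter

def zipf_ranks(words):
    """(rank, frequency) pairs, frequencies sorted in descending order.
    Zipf's law predicts frequency ~ rank^(-alpha) in natural text."""
    freqs = sorted(Counter(words).values(), reverse=True)
    return list(enumerate(freqs, start=1))

def heaps_curve(words):
    """Vocabulary size V(n) as a function of text length n.
    Heaps' law predicts sublinear growth V(n) ~ n^beta, beta < 1."""
    seen, curve = set(), []
    for w in words:
        seen.add(w)
        curve.append(len(seen))
    return curve

toy = "the cat sat on the mat the cat".split()
ranks = zipf_ranks(toy)    # most frequent word ("the") gets rank 1
growth = heaps_curve(toy)  # vocabulary grows sublinearly with length
```

On a real corpus one would fit the exponents on log-log axes; here the toy text only shows the shape of both statistics.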
Charged fermions tunneling from regular black holes
Sharif, M.; Javed, W.
2012-11-15
We study Hawking radiation of charged fermions as a tunneling process from charged regular black holes, i.e., the Bardeen and ABGB black holes. For this purpose, we apply the semiclassical WKB approximation to the general covariant Dirac equation for charged particles and evaluate the tunneling probabilities. We recover the Hawking temperature corresponding to these charged regular black holes. Further, we consider the back-reaction effects of the emitted spin particles from black holes and calculate their corresponding quantum corrections to the radiation spectrum. We find that this radiation spectrum is not purely thermal due to the energy and charge conservation but has some corrections. In the absence of charge, e = 0, our results are consistent with those already present in the literature.
Regular Magnetic Black Hole Gravitational Lensing
NASA Astrophysics Data System (ADS)
Liang, Jun
2017-05-01
The Bronnikov regular magnetic black hole as a gravitational lens is studied. In nonlinear electrodynamics, photons do not follow null geodesics of the background geometry, but move along null geodesics of a corresponding effective geometry. To study gravitational lensing by the Bronnikov regular magnetic black hole in the strong deflection limit, this effective geometry must first be obtained; this is the key step. We obtain the deflection angle in the strong deflection limit, and further calculate the angular positions and magnifications of the relativistic images as well as the time delay between different relativistic images. The influence of the magnetic charge on the black hole gravitational lensing is also discussed. Supported by the Natural Science Foundation of the Education Department of Shaanxi Province under Grant No 15JK1077, and the Doctoral Scientific Research Starting Fund of Shaanxi University of Science and Technology under Grant No BJ12-02.
Superfast Tikhonov Regularization of Toeplitz Systems
NASA Astrophysics Data System (ADS)
Turnes, Christopher K.; Balcan, Doru; Romberg, Justin
2014-08-01
Toeplitz-structured linear systems arise often in practical engineering problems. Correspondingly, a number of algorithms have been developed that exploit Toeplitz structure to gain computational efficiency when solving these systems. The earliest "fast" algorithms for Toeplitz systems required O(n^2) operations, while more recent "superfast" algorithms reduce the cost to O(n (log n)^2) or below. In this work, we present a superfast algorithm for Tikhonov regularization of Toeplitz systems. Using an "extension-and-transformation" technique, our algorithm translates a Tikhonov-regularized Toeplitz system into a type of specialized polynomial problem known as tangential interpolation. Under this formulation, we can compute the solution in only O(n (log n)^2) operations. We use numerical simulations to demonstrate our algorithm's complexity and verify that it returns stable solutions.
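The regularized system described above can be stated concretely. Below is a dense O(n^3) baseline for Tikhonov-regularized Toeplitz least squares, the problem that the paper's superfast O(n (log n)^2) algorithm solves via tangential interpolation; the matrix sizes and values here are illustrative, and this sketch shows only the problem, not the superfast method:

```python
import numpy as np

def tikhonov_toeplitz(c, r, b, lam):
    """Solve min_x ||T x - b||^2 + lam * ||x||^2 for the Toeplitz matrix T
    with first column c and first row r (c[0] == r[0]). Dense O(n^3)
    baseline via the regularized normal equations."""
    n = len(c)
    T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
                  for i in range(n)])
    return np.linalg.solve(T.T @ T + lam * np.eye(n), T.T @ b)

c = np.array([2.0, 1.0, 0.5])   # first column of T
r = np.array([2.0, 0.3, 0.1])   # first row of T
b = np.array([1.0, 2.0, 3.0])
x = tikhonov_toeplitz(c, r, b, lam=0.1)
```

Any superfast variant must return the same x; a dense solver like this is the natural correctness reference in numerical simulations.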
3D Gravity Inversion using Tikhonov Regularization
NASA Astrophysics Data System (ADS)
Toushmalani, Reza; Saibi, Hakim
2015-08-01
Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for a gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined the 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from the Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of salt dome in this region.
Speech enhancement using local spectral regularization
NASA Astrophysics Data System (ADS)
Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly; Diaz, Arnoldo
2016-09-01
A locally-adaptive algorithm for speech enhancement based on local spectral regularization is presented. The algorithm retrieves a clean speech signal from a noisy signal using locally-adaptive signal processing and increases the quality of the noisy signal in terms of objective metrics. Computer simulation results obtained with the proposed algorithm in processing speech signals corrupted with additive noise are presented and discussed.
A regularization approach to hydrofacies delineation
Wohlberg, Brendt; Tartakovsky, Daniel
2009-01-01
We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.
Spectra of sparse regular graphs with loops.
Metz, F L; Neri, I; Bollé, D
2011-11-01
We derive exact equations that determine the spectra of undirected and directed sparsely connected regular graphs containing loops of arbitrary lengths. The implications of our results for the structural and dynamical properties of network models are discussed by showing how loops influence the size of the spectral gap and the propensity for synchronization. Analytical formulas for the spectrum are obtained for specific lengths of the loops.
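As a minimal numerical companion, the adjacency spectrum of the n-cycle, a 2-regular graph forming a single loop, matches the classical closed form 2 cos(2πk/n). This sketch is an elementary check of the kind of analytical spectrum formula the abstract refers to, not the paper's own equations:

```python
import numpy as np

def ring_spectrum(n):
    """Adjacency eigenvalues of the n-cycle (2-regular, one loop of
    length n), computed numerically and sorted."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
    return np.sort(np.linalg.eigvalsh(A))

n = 6
# Classical closed form: eigenvalues are 2 cos(2 pi k / n), k = 0..n-1.
analytic = np.sort(2.0 * np.cos(2.0 * np.pi * np.arange(n) / n))
numeric = ring_spectrum(n)
```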
Bouncing cosmology inspired by regular black holes
NASA Astrophysics Data System (ADS)
Neves, J. C. S.
2017-09-01
In this article, we present a bouncing cosmology inspired by a family of regular black holes. This scale-dependent cosmology deviates from the cosmological principle by means of a scale factor which depends on the radial coordinate as well as time. The model is isotropic but not perfectly homogeneous; that is, it describes a universe that is almost homogeneous only on large scales, such as our observable universe.
Sparse regularization for force identification using dictionaries
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in other basis spaces, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries (Db6 wavelets, Sym4 wavelets, and cubic B-spline functions) can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square, and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both impact and harmonic forces in these cases.
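A minimal sketch of the l1-regularized identification step, using ISTA, a simpler relative of the SpaRSA solver named in the abstract; the toy dictionary, dimensions, and regularization weight are illustrative, not the paper's setup:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(H, y, lam, n_iter=500):
    """Minimize 0.5 * ||H a - y||^2 + lam * ||a||_1 by iterative
    shrinkage-thresholding. H maps dictionary coefficients a to the
    observed response y; the minimizer is sparse in the dictionary."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(H.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a - H.T @ (H @ a - y) / L, lam / L)
    return a

# Toy example: recover a 2-sparse coefficient vector from noiseless data.
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 20))
a_true = np.zeros(20)
a_true[3], a_true[11] = 1.0, -2.0
y = H @ a_true
a_hat = ista(H, y, lam=0.01)
```

The l1 penalty drives all but the truly active coefficients to exactly zero, which is the mechanism by which the number of basis functions is determined adaptively.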
Optical tomography by means of regularized MLEM
NASA Astrophysics Data System (ADS)
Majer, Charles L.; Urbanek, Tina; Peter, Jörg
2015-09-01
To solve the inverse problem involved in fluorescence mediated tomography a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were filled with the fluorochrome Cy5.5, and optical data at 60 projections over 360 degrees were acquired for each configuration. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
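The Richardson-Lucy MLEM core on which the proposed scheme builds can be sketched in a few lines. Note this sketch omits the entropic regularization and floating default prior that the paper adds on top, and the toy system matrix is illustrative:

```python
import numpy as np

def richardson_lucy(y, A, n_iter=2000, eps=1e-12):
    """Plain Richardson-Lucy MLEM iteration for the Poisson model
    y ~ Poisson(A @ x), started from a constant initial condition.
    The multiplicative update preserves nonnegativity of x."""
    x = np.ones(A.shape[1])          # constant initial condition
    sens = A.sum(axis=0)             # sensitivity (column sums of A)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Toy 1-D system: blur a source distribution and reconstruct it.
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
x_true = np.array([2.0, 5.0, 1.0])
y = A @ x_true
x_hat = richardson_lucy(y, A)
```

With noiseless data the iteration converges to the exact source; with noisy data it eventually fits noise, which is precisely what the paper's regularization is designed to suppress.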
Delles, Michael; Rengier, Fabian; Ley, Sebastian; von Tengg-Kobligk, Hendrik; Kauczor, Hans-Ulrich; Dillmann, Rüdiger; Unterhinninghofen, Roland
2011-01-01
In cardiovascular diagnostics, phase-contrast MRI is a valuable technique for measuring blood flow velocities and computing blood pressure values. Unfortunately, both velocity and pressure data typically suffer from the strong image noise of velocity-encoded MRI. In the past, separate approaches of regularization with physical a-priori knowledge and data representation with continuous functions have been proposed to overcome these drawbacks. In this article, we investigate polynomial regularization as an exemplary specification of combining these two techniques. We perform time-resolved three-dimensional velocity measurements and pressure gradient computations on MRI acquisitions of steady flow in a physical phantom. Results based on the higher quality temporal mean data are used as a reference. Thereby, we investigate the performance of our approach of polynomial regularization, which reduces the root mean squared errors to the reference data by 45% for velocities and 60% for pressure gradients.
Regularization Parameter Selections via Generalized Information Criterion
Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling
2009-01-01
We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material.
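A sketch of GIC-based tuning-parameter selection. For simplicity this uses a ridge path with degrees of freedom given by the trace of the hat matrix, rather than the nonconcave penalties studied in the paper; the grid and simulated data are illustrative:

```python
import numpy as np

def gic(y, X, beta, df, kappa):
    """Generalized information criterion n*log(RSS/n) + kappa*df.
    kappa = 2 gives an AIC-type selector; kappa = log(n) a BIC-type one."""
    n = len(y)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + kappa * df

def ridge_path(y, X, lams):
    """Ridge fits over a grid of regularization parameters; the model
    degrees of freedom df is the trace of the hat matrix."""
    n, p = X.shape
    fits = []
    for lam in lams:
        G = X.T @ X + lam * np.eye(p)
        beta = np.linalg.solve(G, X.T @ y)
        df = np.trace(X @ np.linalg.solve(G, X.T))
        fits.append((lam, beta, df))
    return fits

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -1.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(100)
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
# BIC-type selection: minimize GIC with kappa = log(n) over the path.
best = min(ridge_path(y, X, lams),
           key=lambda f: gic(y, X, f[1], f[2], kappa=np.log(len(y))))
```

With a strong signal and small noise the criterion favors light regularization; swapping kappa between 2 and log(n) reproduces the AIC/BIC trade-off the abstract analyzes.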
Regularity and chaos in cavity QED
NASA Astrophysics Data System (ADS)
Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G.
2017-05-01
The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones.
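The participation ratio used here has a standard definition, PR = 1 / Σ_k |c_k|^4 for a state expanded over the eigenenergy basis as |ψ⟩ = Σ_k c_k |E_k⟩: it counts how many eigenstates contribute appreciably. A minimal numerical illustration (the example states are illustrative; the paper additionally divides PR by the number of atoms in the thermodynamic limit):

```python
import numpy as np

def participation_ratio(c):
    """PR = 1 / sum_k p_k^2 with p_k = |c_k|^2 normalized to sum to 1.
    PR = 1 for a state on a single eigenstate; PR = N for a state
    spread evenly over N eigenstates."""
    p = np.abs(c) ** 2
    p = p / p.sum()
    return 1.0 / np.sum(p ** 2)

even = np.ones(4) / 2.0                      # evenly spread over 4 states
single = np.array([1.0, 0.0, 0.0, 0.0])     # concentrated on one state
```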
Guaranteed classification via regularized similarity learning.
Guo, Zheng-Chu; Ying, Yiming
2014-03-01
Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Despite many approaches to similarity metric learning that have been proposed, there has been little theoretical study on the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establish their generalization bounds. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound of similarity learning. This shows that a good generalization of the learned similarity function guarantees a good classification of the resulting linear classifier. Our results extend and improve those obtained by Bellet, Habrard, and Sebban (2012). Due to the techniques dependent on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound obtained there holds true only for the Frobenius matrix-norm regularization. Our techniques using the Rademacher complexity (Bartlett & Mendelson, 2002) and its related Khinchin-type inequality enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including sparse L1-norm and mixed (2,1)-norm.
Automatic detection of regularly repeating vocalizations
NASA Astrophysics Data System (ADS)
Mellinger, David
2005-09-01
Many animal species produce repetitive sounds at regular intervals. This regularity can be used for automatic recognition of the sounds, providing improved detection at a given signal-to-noise ratio. Here, the detection of sperm whale sounds is examined. Sperm whales produce highly repetitive "regular clicks" at periods of about 0.2-2 s, and faster click trains in certain behavioral contexts. The following detection procedure was tested: a spectrogram was computed; values within a certain frequency band were summed; time windowing was applied; each windowed segment was autocorrelated; and the maximum of the autocorrelation within a certain periodicity range was chosen. This procedure was tested on sets of recordings containing sperm whale sounds and interfering sounds, both low-frequency recordings from autonomous hydrophones and high-frequency ones from towed hydrophone arrays. An optimization procedure iteratively varies detection parameters (spectrogram frame length and frequency range, window length, periodicity range, etc.). Performance of various sets of parameters was measured by setting a standard level of allowable missed calls, and the resulting optimum parameters are described. Performance is also compared to that of a neural network trained using the data sets. The method is also demonstrated for sounds of blue whales, minke whales, and seismic airguns. [Funding from ONR.]
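The detection chain described above (spectrogram, in-band summation, autocorrelation of the energy envelope, maximum within a periodicity range) can be sketched directly. All parameter values below are illustrative guesses, not the optimized settings the paper reports:

```python
import numpy as np

def periodicity_score(signal, fs, frame=256, hop=128,
                      band=(2000.0, 10000.0), period_range=(0.2, 2.0)):
    """Score the regularity of a click train: short-time spectra are
    summed within a frequency band, the resulting energy envelope is
    autocorrelated, and the autocorrelation maximum within the allowed
    click-period range is returned."""
    n_frames = (len(signal) - frame) // hop + 1
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    env = np.array([np.sum(np.abs(
        np.fft.rfft(signal[i * hop:i * hop + frame]))[in_band] ** 2)
        for i in range(n_frames)])
    env -= env.mean()
    ac = np.correlate(env, env, mode="full")[n_frames - 1:]
    ac /= ac[0]
    frame_rate = fs / hop
    lo = int(period_range[0] * frame_rate)
    hi = int(period_range[1] * frame_rate)
    return ac[lo:hi].max()

# A train of regular 0.5-s clicks should score higher than white noise.
fs = 48000
rng = np.random.default_rng(2)
n = 5 * fs
clicks = np.zeros(n)
clicks[::fs // 2] = 1.0                                   # click every 0.5 s
burst = np.sin(2 * np.pi * 5000.0 * np.arange(480) / fs)  # 10-ms tone burst
clicky = np.convolve(clicks, burst, mode="same") + 0.1 * rng.standard_normal(n)
noise = rng.standard_normal(n)
```

The comparative score is the quantity a detector would threshold; the optimization loop the abstract describes would then tune frame length, band, and period range against labeled recordings.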
Regular Language Constrained Sequence Alignment Revisited
NASA Astrophysics Data System (ADS)
Kucherov, Gregory; Pinhas, Tamar; Ziv-Ukelson, Michal
Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, Arslan [1] introduced the Regular Language Constrained Sequence Alignment Problem and proposed an O(n^2 t^4) time and O(n^2 t^2) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the non-deterministic automaton, which is given as input. Chung et al. [2] proposed a faster O(n^2 t^3) time algorithm for the same problem. In this paper, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst-case time complexity bound to O(n^2 t^3 / log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner Tree computation. While it does not improve the run time complexity in the worst case, our simulations show that both approaches are efficient in practice, especially when the input automata are dense.
Regularization Parameter Selections via Generalized Information Criterion.
Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling
2010-03-01
We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material.
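The GIC-type selection rule can be sketched in an orthonormal-design toy problem: models are nested along the path ordered by |Xᵀy|, and GIC(k) = n·log(RSS_k/n) + κ·k with κ = 2 (AIC-type) or κ = log n (BIC-type). The setup and path construction below are illustrative assumptions, not the paper's nonconcave penalized likelihood estimator:

```python
import numpy as np

def gic_path_select(X, y, kappa):
    """Minimize GIC(k) = n*log(RSS_k/n) + kappa*k over the nested model
    path ordered by |X^T y| (valid when the columns of X are orthonormal).
    Returns the selected set of variable indices."""
    n, p = X.shape
    z = X.T @ y
    order = np.argsort(-np.abs(z))
    rss = np.concatenate([[y @ y], y @ y - np.cumsum(z[order] ** 2)])
    gic = n * np.log(rss / n) + kappa * np.arange(p + 1)
    k = int(np.argmin(gic))
    return set(order[:k].tolist())
```

On simulated data with three true signals, the BIC-type selector (κ = log n) recovers them, and by construction the AIC-type selector (κ = 2) can never choose a sparser model along the same path.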
Discovering Structural Regularity in 3D Geometry
Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.
2010-01-01
We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
Sparsity regularization for parameter identification problems
NASA Astrophysics Data System (ADS)
Jin, Bangti; Maass, Peter
2012-12-01
The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
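The basic iterated soft-shrinkage scheme mentioned above reduces, for a linear operator and an ℓ1 penalty, to a few lines (the step size 1/L and penalty weight are the standard ISTA choices; values in the usage below are illustrative):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, alpha, n_iter=500):
    """Iterated soft shrinkage for min_x 0.5*||A x - y||^2 + alpha*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - (A.T @ (A @ x - y)) / L, alpha / L)
    return x
```

Because each step is a majorization-minimization step, the objective never increases, and the shrinkage produces exact zeros in the iterates.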
NASA Astrophysics Data System (ADS)
Kim, Bong-Sik
Three dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then, the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f and the estimates are uniform in alpha.
Regular self-motion of a liquid droplet powered by the chemical marangoni effect.
Nagai, Ken; Sumino, Yutaka; Yoshikawa, Kenichi
2007-04-15
We describe here our recent work on the spontaneous regular motion of a liquid droplet powered by the chemical Marangoni effect under spatially symmetric conditions. It is shown that a spontaneously crawling oil droplet on a glass substrate, under a nonequilibrium chemical condition of cationic surfactant, exhibits regular rhythmic motion in a quasi-one-dimensional vessel, whereas irregular motion is induced in a two-dimensionally isotropic environment. Such behavior of a droplet demonstrates that spontaneous regular motion can be generated under fluctuating conditions by imposing an appropriate geometry. As another system, we introduce an alcohol droplet moving spontaneously on a water surface. The droplet spontaneously forms a specific morphology depending on its volume, causing a specific mode of translational motion. An alcohol droplet with a smaller volume floating on the water surface moves irregularly. On the other hand, a droplet with a larger volume undergoes vectorial motion accompanied by deformation into an asymmetric shape. This result suggests a scenario for the emergence of regular motion coupled with geometrical pattern formation under far-from-equilibrium conditions.
On the Global Regularity for the 3D Magnetohydrodynamics Equations Involving Partial Components
NASA Astrophysics Data System (ADS)
Qian, Chenyin
2017-01-01
In this paper, we study the regularity criteria of the three-dimensional magnetohydrodynamics system in terms of some components of the velocity field and the magnetic field. With a decomposition of the four nonlinear terms of the system, this result gives an improvement of some corresponding previous works (Yamazaki in J Math Fluid Mech 16: 551-570, 2014; Jia and Zhou in Nonlinear Anal Real World Appl 13: 410-418, 2012).
Convergence and Fluctuations of Regularized Tyler Estimators
NASA Astrophysics Data System (ADS)
Kammoun, Abla; Couillet, Romain; Pascal, Ferderic; Alouini, Mohamed-Slim
2016-02-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem in the practical use of RTEs is the setting of the regularization parameter ρ. While a high value of ρ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the more difficult case of N and n both large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter ρ.
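A direct way to compute an RTE is the standard fixed-point iteration below. The dimension, sample size, and value of ρ are illustrative; real array data would typically be complex-valued, but the sketch uses real samples for simplicity:

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-8):
    """Fixed-point iteration for a regularized Tyler estimator.

    X: (n, N) array of n observations of dimension N.  Iterates
    C <- (1-rho)*(N/n) * sum_i x_i x_i^T / (x_i^T C^{-1} x_i) + rho*I,
    which converges for rho in (0, 1]; the rho*I term keeps the
    estimate well conditioned by construction."""
    n, N = X.shape
    C = np.eye(N)
    for _ in range(n_iter):
        Ci = np.linalg.inv(C)
        q = np.einsum('ij,jk,ik->i', X, Ci, X)          # x_i^T C^{-1} x_i
        C_new = (1 - rho) * (N / n) * (X / q[:, None]).T @ X + rho * np.eye(N)
        if np.linalg.norm(C_new - C) < tol * np.linalg.norm(C):
            return C_new
        C = C_new
    return C
```

Note that every iterate is, by construction, a positive-definite matrix whose eigenvalues are bounded below by ρ.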
Regular physical exercise: way to healthy life.
Siddiqui, N I; Nessa, A; Hossain, M A
2010-01-01
Any bodily activity or movement that enhances and maintains overall health and physical fitness is called physical exercise. The habit of regular physical exercise has numerous benefits. Exercise is of various types, such as aerobic exercise, anaerobic exercise and flexibility exercise. Aerobic exercise moves the large muscle groups with alternate contraction and relaxation, forces deep breathing, and makes the heart pump more blood with adequate tissue oxygenation. It is also called cardiovascular exercise. Examples of aerobic exercise are walking, running, jogging, swimming etc. In anaerobic exercise, there is forceful contraction of muscle with stretching, usually mechanically aided, which helps build up muscle strength and muscle bulk. Examples are weight lifting, pulling, pushing, sprinting etc. Flexibility exercise is a type of stretching exercise to improve the movements of muscles, joints and ligaments. Walking is a good example of aerobic exercise: easy to perform, safe, effective, requiring no training or equipment, and with less chance of injury. A regular 30-minute brisk walk in the morning, totaling 150 minutes per week, is good exercise. Regular exercise improves cardiovascular status and reduces the risk of cardiac disease, high blood pressure and cerebrovascular disease. It reduces body weight, improves insulin sensitivity, helps in glycemic control, and prevents obesity and diabetes mellitus. It is helpful for relieving anxiety and stress, and brings a sense of well-being and overall physical fitness. The global trend toward mechanization and labor saving is leading to an epidemic of long-term chronic diseases like diabetes mellitus, cardiovascular diseases etc. All efforts should be made to create public awareness promoting physical activity and physically demanding recreational pursuits, and to provide adequate facilities.
Information-theoretic semi-supervised metric learning via entropy regularization.
Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi
2014-08-01
We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.
Total variation regularization for fMRI-based prediction of behavior.
Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand
2011-07-01
While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional magnetic resonance imaging (fMRI) data, that provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioral variables from brain activation patterns better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the l(1) norm of the image gradient, also known as its total variation (TV), as regularization. We apply for the first time this method to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification.
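In one dimension, TV-regularized denoising, min_x ½‖x − y‖² + w·Σ|x[i+1] − x[i]|, can be solved by projected gradient ascent on its dual in a few lines of NumPy. The weight w and iteration count are illustrative, and the paper's setting is spatial TV inside a learning objective rather than this plain denoising problem:

```python
import numpy as np

def tv_denoise_1d(y, w, n_iter=2000):
    """1-D total-variation denoising: min_x 0.5*||x - y||^2 + w*TV(x),
    solved by projected gradient on the dual variable z (one entry per
    first difference, constrained to |z_i| <= w).  Step 0.25 = 1/||D D^T||."""
    y = np.asarray(y, dtype=float)
    z = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        x = y + np.diff(z, prepend=0.0, append=0.0)   # x = y - D^T z
        z = np.clip(z + 0.25 * np.diff(x), -w, w)     # projected dual ascent
    return y + np.diff(z, prepend=0.0, append=0.0)
```

On a noisy step signal, TV denoising flattens the homogeneous regions while keeping the jump, which is exactly the edge-preserving behavior the article exploits for brain maps.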
Quaternion regularization and trajectory motion control in celestial mechanics and astrodynamics: II
NASA Astrophysics Data System (ADS)
Chelnokov, Yu. N.
2014-07-01
Problems of regularization in celestial mechanics and astrodynamics are considered, and basic regular quaternion models for celestial mechanics and astrodynamics are presented. It is shown that the effectiveness of analytical studies and numerical solutions to boundary value problems of controlling the trajectory motion of spacecraft can be improved by using quaternion models of astrodynamics. In this second part of the paper, specific singularity-type features (division by zero) are considered. They result from using classical equations in angular variables (particularly in Euler variables) in celestial mechanics and astrodynamics and can be eliminated by using Euler (Rodrigues-Hamilton) parameters and Hamilton quaternions. Basic regular (in the above sense) quaternion models of celestial mechanics and astrodynamics are considered; these include equations of trajectory motion written in nonholonomic, orbital, and ideal moving trihedrals whose rotational motions are described by Euler parameters and quaternions of turn; and quaternion equations of instantaneous orbit orientation of a celestial body (spacecraft). New quaternion regular equations are derived for the perturbed three-dimensional two-body problem (spacecraft trajectory motion). These equations are constructed using ideal rectangular Hansen coordinates and quaternion variables, and they have additional advantages over those known for regular Kustaanheimo-Stiefel equations.
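The singularity-free character of Euler (Rodrigues-Hamilton) parameters is easy to see in code: a rotation is applied as q∘v∘q*, with no trigonometric inversion that could divide by zero at gimbal-lock attitudes. This is a minimal generic illustration, not the paper's orbital or trajectory equations:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw])

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle, using the Euler
    parameters q = [cos(a/2), sin(a/2)*axis] and v' = q v q*."""
    q = np.concatenate([[np.cos(angle / 2)],
                        np.sin(angle / 2) * np.asarray(axis, dtype=float)])
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])        # conjugate
    return quat_mul(quat_mul(q, np.concatenate([[0.0], v])), qc)[1:]
```

A 90° rotation about z maps x to y, and a 90° pitch (a gimbal-lock attitude for Euler angles) is handled without any special case.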
Regular Expression Analysis of Procedures and Exceptions,
1985-06-01
[Scanned report; only fragments survive OCR] Regular Expression Analysis of Procedures and Exceptions (U), Royal Signals and Radar Establishment, Malvern (England), J. M. Foster, June 1985. The recoverable text defines a path algebra: "." means composition in sequence, so a.b denotes the path formed by a followed by b; the constant 1 is a unit for the dot operation (1.a = a and a.1 = a); and dot distributes over the alternative operation on both sides, e.g. (a + b).c = a.c + b.c.
Regularization ambiguities in loop quantum gravity
NASA Astrophysics Data System (ADS)
Perez, Alejandro
2006-02-01
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation which is free of ultraviolet divergences. However, ambiguities associated to the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem, that is, the existence of well-behaved regularizations of the constraints, is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated to the SU(2) unitary representation used in the diffeomorphism covariant “point-splitting” regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory and conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions, due to the difficulties associated to the definition of the physical inner product, it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we
The regular state in higher order gravity
NASA Astrophysics Data System (ADS)
Cotsakis, Spiros; Kadry, Seifedine; Trachilis, Dimitrios
2016-08-01
We consider the higher-order gravity theory derived from the quadratic Lagrangian R + 𝜖R² in vacuum as a first-order (ADM-type) system with constraints, and build time developments of solutions of an initial value formulation of the theory. We show that all such solutions, if analytic, contain the right number of free functions to qualify as general solutions of the theory. We further show that any regular analytic solution which satisfies the constraints and the evolution equations can be given in the form of an asymptotic formal power series expansion.
Total-variation regularization with bound constraints
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
Multichannel image regularization using anisotropic geodesic filtering
Grazzini, Jacopo A
2010-01-01
This paper extends a recent image-dependent regularization approach aimed at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. Building on this, we design a generalized anisotropic geodesic filter, incorporating not only a measure of the edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.
New Regularization Method for EXAFS Analysis
Reich, Tatiana Ye.; Reich, Tobias; Korshunov, Maxim E.; Antonova, Tatiana V.; Ageev, Alexander L.; Moll, Henry
2007-02-02
As an alternative to the analysis of EXAFS spectra by conventional shell fitting, the Tikhonov regularization method has been proposed. An improved algorithm that utilizes a priori information about the sample has been developed and applied to the analysis of U L3-edge spectra of soddyite, (UO2)2SiO4·2H2O, and of U(VI) sorbed onto kaolinite. The partial radial distribution functions g1(UU), g2(USi), and g3(UO) of soddyite agree with crystallographic values and previous EXAFS results.
Taslaman, Leo; Nilsson, Björn
2012-01-01
Non-negative matrix factorization (NMF) condenses high-dimensional data into lower-dimensional models subject to the requirement that data can only be added, never subtracted. However, the NMF problem does not have a unique solution, creating a need for additional constraints (regularization constraints) to promote informative solutions. Regularized NMF problems are more complicated than conventional NMF problems, creating a need for computational methods that incorporate the extra constraints in a reliable way. We developed novel methods for regularized NMF based on block-coordinate descent with proximal point modification and a fast optimization procedure over the alpha simplex. Our framework has important advantages in that it (a) accommodates a wide range of regularization terms, including sparsity-inducing terms like the L1 penalty, (b) guarantees that the solutions satisfy necessary conditions for optimality, ensuring that the results have well-defined numerical meaning, (c) allows the scale of the solution to be controlled exactly, and (d) is computationally efficient. We illustrate the use of our approach in the context of gene expression microarray data analysis. The improvements described remedy key limitations of previous proposals, strengthen the theoretical basis of regularized NMF, and facilitate the use of regularized NMF in applications.
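For comparison, the classical multiplicative-update heuristic for an L1-regularized NMF objective ‖V − WH‖²_F + λ·ΣH takes only a few lines. It lacks the optimality and exact scale-control guarantees of the block-coordinate method described above and is shown only as a baseline sketch with illustrative parameter values:

```python
import numpy as np

def nmf_l1(V, r, lam=0.1, n_iter=300, seed=0):
    """Multiplicative updates for min ||V - W H||_F^2 + lam*sum(H),
    with W, H >= 0 maintained by construction (Lee-Seung updates,
    L1 term folded into the denominator of the H update)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12                      # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On an exactly rank-2 non-negative matrix, the factorization reaches a small relative reconstruction error while keeping both factors non-negative throughout.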
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
Tomographic laser absorption spectroscopy using Tikhonov regularization.
Guha, Avishek; Schoegl, Ingmar
2014-12-01
The application of tunable diode laser absorption spectroscopy (TDLAS) to flames with nonhomogeneous temperature and concentration fields is an area where only a few studies exist. This experimental work explores the performance of tomographic reconstructions of species concentration and temperature profiles from wavelength-modulated TDLAS measurements within the plume of an axisymmetric McKenna burner. Water vapor transitions at 1391.67 and 1442.67 nm are probed using calibration-free wavelength modulation spectroscopy with second harmonic detection (WMS-2f). A single collimated laser beam is swept parallel to the burner surface, where scans yield pairs of line-of-sight (LOS) data at multiple radial locations. Radial profiles of absorption data are reconstructed using Tikhonov regularized Abel inversion, which suppresses the amplification of experimental noise that is typically observed for reconstructions with high spatial resolution. Based on spectral data reconstructions, temperatures and mole fractions are calculated point-by-point. Here, a least-squares approach addresses difficulties due to modulation depths that cannot be universally optimized due to a nonuniform domain. Experimental results show successful reconstructions of temperature and mole fraction profiles based on two-transition, nonoptimally modulated WMS-2f and Tikhonov regularized Abel inversion, and thus validate the technique as a viable diagnostic tool for flame measurements.
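Generic Tikhonov regularization with a smoothness (second-difference) penalty reduces to a single linear solve. In the sketch below the ill-conditioned Hilbert matrix merely stands in for the Abel-inversion operator, and the penalty weight is an illustrative choice, not a value from the paper:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam*||L x||^2 with L the second-difference
    operator, via the normal equations (A^T A + lam*L^T L) x = A^T b.
    The smoothness penalty suppresses the noise amplification that a
    direct inversion of an ill-conditioned A would produce."""
    n = A.shape[1]
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```

Even with measurement noise at the 1e-6 level, the direct solve of the ill-conditioned system is useless, while the regularized solution stays close to the smooth true profile.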
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
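The underlying model being protected is plain L2-regularized logistic regression. A minimal sketch of that model (not the paper's cryptographic protocol) follows; it also shows why the distributed setting is natural: the data term of the gradient is a sum over records, so each institution could compute its share and only aggregates would need to be combined. All names and parameters here are illustrative.

```python
import numpy as np

def fit_l2_logistic(X, y, lam=0.01, lr=0.1, iters=2000):
    """L2-regularized logistic regression via gradient descent.
    Minimizes mean log-loss + (lam/2)*||w||^2 for labels y in {0,1}.
    Note X.T @ (p - y) is a sum over records, so it can be accumulated
    per institution and securely aggregated in a distributed setting."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y) / n + lam * w)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = (X @ np.array([2.0, -1.0, 0.0]) > 0).astype(float)
w, b = fit_l2_logistic(X, y)
acc = np.mean(((X @ w + b) > 0) == (y > 0.5))    # training accuracy
```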
Accelerating Large Data Analysis By Exploiting Regularities
NASA Technical Reports Server (NTRS)
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
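Testing whether one zone is a rigid-body transformation of another reduces to finding the best-fit rotation and translation between corresponding vertex sets and checking the residual. The abstract does not specify the algorithm used; a standard choice, sketched here as an assumption, is the Kabsch/Procrustes fit:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ≈ P @ R.T + t
    (Kabsch algorithm). A near-zero residual indicates that mesh Q is a
    rigid-body copy of mesh P."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(2)
P = rng.standard_normal((50, 3))           # reference zone vertices
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(P, Q)
residual = np.max(np.abs(P @ R.T + t - Q))
```

For time-series data the same fit, applied per time step against a reference mesh, yields the dynamic transformation described above.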
Motion regularization for matting motion blurred objects.
Lin, Hai Ting; Tai, Yu-Wing; Brown, Michael S
2011-11-01
This paper addresses the problem of matting motion blurred objects from a single image. Existing single image matting methods are designed to extract static objects that have fractional pixel occupancy. This arises because the physical scene object has a finer resolution than the discrete image pixel and therefore only occupies a fraction of the pixel. For a motion blurred object, however, fractional pixel occupancy is attributed to the object’s motion over the exposure period. While conventional matting techniques can be used to matte motion blurred objects, they are not formulated in a manner that considers the object’s motion and tend to work only when the object is on a homogeneous background. We show how to obtain better alpha mattes by introducing a regularization term in the matting formulation to account for the object’s motion. In addition, we outline a method for estimating local object motion based on local gradient statistics from the original image. For the sake of completeness, we also discuss how user markup can be used to denote the local direction in lieu of motion estimation. Improvements to alpha mattes computed with our regularization are demonstrated on a variety of examples.
Nonlinear regularization techniques for seismic tomography
Loris, I.; Douma, H.; Nolet, G.; Regone, C.
2010-02-01
The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear, l2 penalties are compared to so-called sparsity promoting l1 and l0 penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an l2 norm works almost as well as minimizing the total variation but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer l1 damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple l2 minimization ('Tikhonov regularization') which should be avoided. In some of our examples, the l0 method produced notable artifacts. In addition we show how nonlinear l1 methods for finding sparse models can be competitive in speed with the widely used l2 methods, certainly under noisy conditions, so that there is no need to shun l1 penalizations.
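The contrast between l2 and sparsity-promoting l1 penalties can be demonstrated on a toy linear inverse problem. This is a generic sketch, not the authors' tomography code: the seismic operator is replaced by a random matrix, and the l1 problem is solved with FISTA (accelerated iterative soft thresholding), one standard choice among several.

```python
import numpy as np

def fista_l1(A, b, lam, iters=1000):
    """FISTA: accelerated soft-thresholding for the sparsity-promoting
    problem min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2)**2            # Lipschitz constant of gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        w = z - A.T @ (A @ z - b) / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))         # stand-in for a tomography matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 1.5]     # sparse model
b = A @ x_true
x_l1 = fista_l1(A, b, lam=0.05)
x_l2 = np.linalg.solve(A.T @ A + 0.05 * np.eye(100), A.T @ b)  # Tikhonov
```

The l1 solution concentrates on the three true model parameters, while the l2 (Tikhonov) solution smears energy across essentially all coefficients, mirroring the trade-off discussed in the abstract.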
Words cluster phonetically beyond phonotactic regularities.
Dautriche, Isabelle; Mahowald, Kyle; Gibson, Edward; Christophe, Anne; Piantadosi, Steven T
2017-06-01
Recent evidence suggests that cognitive pressures associated with language acquisition and use could affect the organization of the lexicon. On one hand, consistent with noisy channel models of language (e.g., Levy, 2008), the phonological distance between wordforms should be maximized to avoid perceptual confusability (a pressure for dispersion). On the other hand, a lexicon with high phonological regularity would be simpler to learn, remember and produce (e.g., Monaghan et al., 2011) (a pressure for clumpiness). Here we investigate wordform similarity in the lexicon, using measures of word distance (e.g., phonological neighborhood density) to ask whether there is evidence for dispersion or clumpiness of wordforms in the lexicon. We develop a novel method to compare lexicons to phonotactically-controlled baselines that provide a null hypothesis for how clumpy or sparse wordforms would be as the result of only phonotactics. Results for four languages, Dutch, English, German and French, show that the space of monomorphemic wordforms is clumpier than what would be expected by the best chance model according to a wide variety of measures: minimal pairs, average Levenshtein distance and several network properties. This suggests a fundamental drive for regularity in the lexicon that conflicts with the pressure for words to be as phonologically distinct as possible. Copyright © 2017 Elsevier B.V. All rights reserved.
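The clumpiness comparison can be sketched with one of the measures named above, average Levenshtein distance, against a crude baseline. This is only illustrative: the toy lexicon and the length-matched random-letter baseline below stand in for the authors' phonotactically controlled baselines.

```python
import numpy as np

def levenshtein(a, b):
    """Dynamic-programming edit distance between two wordforms."""
    m, n = len(a), len(b)
    row = list(range(n + 1))
    for i in range(1, m + 1):
        prev, row[0] = row[0], i
        for j in range(1, n + 1):
            # prev holds the diagonal entry d[i-1][j-1]
            prev, row[j] = row[j], min(row[j] + 1, row[j - 1] + 1,
                                       prev + (a[i - 1] != b[j - 1]))
    return row[n]

def mean_pairwise(words):
    d = [levenshtein(w1, w2) for i, w1 in enumerate(words)
         for w2 in words[i + 1:]]
    return sum(d) / len(d)

lexicon = ["cat", "bat", "hat", "rat", "mat"]   # a clumpy toy "lexicon"
rng = np.random.default_rng(5)
letters = list("abcdefghijklmnopqrstuvwxyz")
baseline = ["".join(rng.choice(letters, size=len(w))) for w in lexicon]
clumpy = mean_pairwise(lexicon) < mean_pairwise(baseline)
```

A real analysis would repeat the baseline sampling many times to form a null distribution, as the paper does with phonotactic generative models.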
An efficient, advanced regularized inversion method for highly parameterized environmental models
NASA Astrophysics Data System (ADS)
Skahill, B. E.; Baggett, J. S.
2008-12-01
The Levenberg-Marquardt method of computer based parameter estimation can be readily modified in cases of high parameter insensitivity and correlation by the inclusion of various regularization devices to maintain numerical stability and robustness, including, for example, Tikhonov regularization and truncated singular value decomposition. With Tikhonov regularization, where parameters or combinations of parameters cannot be uniquely estimated, they are provided with values or assigned relationships with other parameters that are decreed to be realistic by the modeler. Tikhonov schemes provide a mechanism for assimilation of valuable "outside knowledge" into the inversion process, with the result that parameter estimates, thus informed by a modeler's expertise, are more suitable for use in the making of important predictions by that model than would otherwise be the case. However, by maintaining the high dimensionality of the adjustable parameter space, they can potentially be computationally burdensome. Moreover, while Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. We will present results associated with development efforts that include an accelerated Levenberg-Marquardt local search algorithm adapted for Tikhonov regularization, and a technique which allows relative regularization weights to be estimated as parameters through the calibration process itself (Doherty and Skahill, 2006). This new method, encapsulated in the MICUT software (Skahill et al., 2008), will be compared, in terms of efficiency and enforcement of regularization relationships, with the SVD Assist method (Tonkin and Doherty, 2005) contained in the popular PEST package by considering various watershed
Color normalization of histology slides using graph regularized sparse NMF
NASA Astrophysics Data System (ADS)
Sha, Lingdao; Schonfeld, Dan; Sethi, Amit
2017-03-01
Computer based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and particularly its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods like PCA, ICA, NMF and SNMF fail to consider important information about the sparse manifolds that their pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like the Graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using a heat kernel in lαβ space. The
Hawking fluxes and anomalies in rotating regular black holes with a time-delay
NASA Astrophysics Data System (ADS)
Takeuchi, Shingo
2016-11-01
Based on the anomaly cancellation method, we compute the Hawking fluxes (the Hawking thermal flux and the total flux of the energy-momentum tensor) from a four-dimensional rotating regular black hole with a time-delay. To this end, for the three metrics proposed in [1], we attempt the dimensional reduction in which the anomaly cancellation method is feasible in the near-horizon region in a general scalar field theory. As a result, we demonstrate that the dimensional reduction is possible in two of those metrics, and for these two we perform the anomaly cancellation method and compute the Hawking fluxes. Our Hawking fluxes involve three effects: (1) the quantum gravity effect regularizing the core of the black hole, (2) the rotation of the black hole, and (3) the time-delay. For the metric in which the dimensional reduction could not be performed, we argue that the metric itself is problematic and discuss the cause. The Hawking fluxes computed in this study can be considered to correspond to more realistic Hawking fluxes. Moreover, which Hawking fluxes can be obtained from the anomaly cancellation method is an interesting question in terms of the relation between the consistency of quantum field theories and black hole thermodynamics.
Efficient Regularized Regression with L0 Penalty for Variable Selection and Network Construction
2016-01-01
Variable selections for regression with high-dimensional big data have found many applications in bioinformatics and computational biology. One appealing approach is the L0 regularized regression which penalizes the number of nonzero features in the model directly. However, it is well known that L0 optimization is NP-hard and computationally challenging. In this paper, we propose efficient EM (L0EM) and dual L0EM (DL0EM) algorithms that directly approximate the L0 optimization problem. While L0EM is efficient with large sample size, DL0EM is efficient with high-dimensional (n ≪ m) data. They also provide a natural solution to all Lp (p ∈ [0,2]) problems, including lasso with p = 1 and elastic net with p ∈ [1,2]. The regularization parameter λ can be determined through cross validation or AIC and BIC. We demonstrate our methods through simulation and high-dimensional genomic data. The results indicate that L0 has better performance than lasso, SCAD, and MC+, and L0 with AIC or BIC has similar performance as computationally intensive cross validation. The proposed algorithms are efficient in identifying the nonzero variables with less bias and constructing biologically important networks with high-dimensional big data. PMID:27843486
Quantum search algorithms on a regular lattice
Hein, Birgit; Tanner, Gregor
2010-07-15
Quantum algorithms for searching for one or more marked items on a d-dimensional lattice provide an extension of Grover's search algorithm including a spatial component. We demonstrate that these lattice search algorithms can be viewed in terms of the level dynamics near an avoided crossing of a one-parameter family of quantum random walks. We give approximations for both the level splitting at the avoided crossing and the effectively two-dimensional subspace of the full Hilbert space spanning the level crossing. This makes it possible to give the leading order behavior for the search time and the localization probability in the limit of large lattice size including the leading order coefficients. For d=2 and d=3, these coefficients are calculated explicitly. Closed form expressions are given for higher dimensions.
Incremental projection approach of regularization for inverse problems
Souopgui, Innocent; Ngodock, Hans E.; Vidard, Arthur; Le Dimet, François-Xavier
2016-10-15
This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses seven times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
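The projection idea can be sketched on a toy 1-D deblurring problem: take a gradient step on the data term only, then project the iterate onto a convex set of "regularized candidates". This is not the paper's motion-estimation setup; the projection set (the span of the lowest Fourier modes) and all parameters are illustrative choices.

```python
import numpy as np

def project_smooth(x, m):
    """Projection onto the convex subspace spanned by the lowest m
    Fourier modes: the set of 'regularized candidate' solutions."""
    X = np.fft.rfft(x)
    X[m:] = 0.0
    return np.fft.irfft(X, n=len(x))

def incremental_projection(K, b, m, iters=300, lr=0.5):
    """Gradient step on the data term ||Kx - b||^2, then project.
    No regularization term appears in the objective itself."""
    x = np.zeros(K.shape[1])
    for _ in range(iters):
        x = x - lr * (K.T @ (K @ x - b))
        x = project_smooth(x, m)
    return x

n = 128
t = np.arange(n) / n
x_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
idx = np.arange(n)
dist = np.minimum((idx[None, :] - idx[:, None]) % n,
                  (idx[:, None] - idx[None, :]) % n)
K = np.exp(-dist**2 / 18.0)                # circulant Gaussian blur
K /= K.sum(axis=1, keepdims=True)
rng = np.random.default_rng(6)
b = K @ x_true + 0.02 * rng.standard_normal(n)
x_rec = incremental_projection(K, b, m=8)
```

Because the projection is cheap relative to forming and weighting a regularization gradient, each iteration stays close in cost to an unregularized step, which is the source of the speed advantage reported in the abstract.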
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using an ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
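The flavor of manifold regularization, though not the paper's LapESVR/LapERLS formulation, can be sketched in its simplest form: penalize the function's variation over a k-nearest-neighbor graph so that labels propagate along the data manifold. The graph construction, data, and parameters below are illustrative assumptions.

```python
import numpy as np

def knn_graph(X, k=7):
    """Symmetric k-nearest-neighbor adjacency with binary weights."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)
    W = np.zeros_like(D)
    for i, js in enumerate(np.argsort(D, axis=1)[:, :k]):
        W[i, js] = 1.0
    return np.maximum(W, W.T)

def laplacian_rls(X, y_labeled, labeled_idx, k=7, lam=0.1):
    """Manifold-regularized least squares on function values f:
    min sum_{i labeled} (f_i - y_i)^2 + lam * f^T L f,
    where L is the combinatorial graph Laplacian."""
    n = len(X)
    W = knn_graph(X, k)
    L = np.diag(W.sum(1)) - W
    J = np.zeros((n, n)); y = np.zeros(n)
    for i, yi in zip(labeled_idx, y_labeled):
        J[i, i] = 1.0; y[i] = yi
    return np.linalg.solve(J + lam * L, J @ y)

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 2)) * 0.3              # cluster A near origin
B = rng.standard_normal((30, 2)) * 0.3 + [4.0, 0.0] # cluster B, far away
X = np.vstack([A, B])
# one labeled point per cluster; the rest is inferred from the graph
f = laplacian_rls(X, y_labeled=[1.0, -1.0], labeled_idx=[0, 30])
```

With only two labels, the Laplacian penalty spreads each label across its cluster, which is the core mechanism behind LapRLS-type methods mentioned in the abstract.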
Anderson localization and ergodicity on random regular graphs
NASA Astrophysics Data System (ADS)
Tikhonov, K. S.; Mirlin, A. D.; Skvortsov, M. A.
2016-12-01
A numerical study of Anderson transition on random regular graphs (RRGs) with diagonal disorder is performed. The problem can be described as a tight-binding model on a lattice with N sites that is locally a tree with constant connectivity. In a certain sense, the RRG ensemble can be seen as an infinite-dimensional (d →∞ ) cousin of the Anderson model in d dimensions. We focus on the delocalized side of the transition and stress the importance of finite-size effects. We show that the data can be interpreted in terms of the finite-size crossover from a small (N ≪Nc ) to a large (N ≫Nc ) system, where Nc is the correlation volume diverging exponentially at the transition. A distinct feature of this crossover is a nonmonotonicity of the spectral and wave-function statistics, which is related to properties of the critical phase in the studied model and renders the finite-size analysis highly nontrivial. Our results support an analytical prediction that states in the delocalized phase (and at N ≫Nc ) are ergodic in the sense that their inverse participation ratio scales as 1 /N .
Regularized gene selection in cancer microarray meta-analysis.
Ma, Shuangge; Huang, Jian
2009-01-01
In cancer studies, it is common that multiple microarray experiments are conducted to measure the same clinical outcome and expressions of the same set of genes. An important goal of such experiments is to identify a subset of genes that can potentially serve as predictive markers for cancer development and progression. Analyses of individual experiments may lead to unreliable gene selection results because of the small sample sizes. Meta analysis can be used to pool multiple experiments, increase statistical power, and achieve more reliable gene selection. The meta analysis of cancer microarray data is challenging because of the high dimensionality of gene expressions and the differences in experimental settings amongst different experiments. We propose a Meta Threshold Gradient Descent Regularization (MTGDR) approach for gene selection in the meta analysis of cancer microarray data. The MTGDR has many advantages over existing approaches. It allows different experiments to have different experimental settings. It can account for the joint effects of multiple genes on cancer, and it can select the same set of cancer-associated genes across multiple experiments. Simulation studies and analyses of multiple pancreatic and liver cancer experiments demonstrate the superior performance of the MTGDR. The MTGDR provides an effective way of analyzing multiple cancer microarray studies and selecting reliable cancer-associated genes.
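The single-experiment building block of MTGDR, threshold gradient descent regularization, can be sketched for a linear model: at each step, only coordinates whose gradient magnitude is within a threshold factor of the largest are updated, which yields sparse coefficient paths. This is a generic sketch of the thresholded-gradient idea, not the meta-analysis extension; data and tuning values are illustrative.

```python
import numpy as np

def tgdr(X, y, tau=0.9, nu=0.01, steps=1000):
    """Threshold gradient descent regularization (sketch).
    tau in [0,1] controls sparsity: tau near 1 updates only the
    coordinates with the largest gradient magnitudes."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(steps):
        g = X.T @ (y - X @ beta) / n           # gradient of fit quality
        mask = np.abs(g) >= tau * np.abs(g).max()
        beta += nu * g * mask                  # small step on selected coords
    return beta

rng = np.random.default_rng(9)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[[2, 11]] = [1.5, -2.0]               # two true predictors
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta = tgdr(X, y)
```

In the MTGDR setting described above, the thresholding is applied jointly across experiments so that the same genes are selected in all of them.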
Regularity of feedback operators for boundary control of thermal processes
Burns, J.A.; Rubio, D.; King, B.B.
1994-12-31
This note is concerned with the regularity of functional gains for boundary control of thermal processes. Functional gains are kernel functions in integral representations of feedback operators computed by solving algebraic Riccati equations arising from infinite dimensional LQR control problems. In earlier work, Burns and King showed that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smooths solutions of the corresponding Riccati equation. When this result is applied to problems with distributed controllers, it can be established that the resulting feedback operator is also smooth. However, it is the continuity of the input operator that leads to a positive result in this case. When boundary control is applied, the input operator is unbounded and that analysis fails. However, for 1D heat flow it is possible to recover the result because of the special nature of the problem. The problem is still not settled for the 2D and 3D heat equations. In this paper we present numerical evidence to suggest that the functional gains exist and have compact support near the boundary where the control is applied. Both properties are important in addressing sensor and actuator location problems and they have practical implications in the design of reduced order controllers for PDE systems.
Revealing hidden regularities with a general approach to fission
NASA Astrophysics Data System (ADS)
Schmidt, Karl-Heinz; Jurado, Beatriz
2015-12-01
Selected aspects of a general approach to nuclear fission are described with the focus on the possible benefit of meeting the increasing need of nuclear data for the existing and future emerging nuclear applications. The most prominent features of this approach are the evolution of quantum-mechanical wave functions in systems with complex shape, memory effects in the dynamics of stochastic processes, the influence of the Second Law of thermodynamics on the evolution of open systems in terms of statistical mechanics, and the topological properties of a continuous function in multi-dimensional space. It is demonstrated that this approach allows reproducing the measured fission barriers and the observed properties of the fission fragments and prompt neutrons. Our approach is based on sound physical concepts, as demonstrated by the fact that practically all the parameters have a physical meaning, and reveals a high degree of regularity in the fission observables. Therefore, we expect a good predictive power within the region extending from Po isotopes to Sg isotopes where the model parameters have been adjusted. Our approach can be extended to other regions provided that there is enough empirical information available that allows determining appropriate values of the model parameters. Possibilities for combining this general approach with microscopic models are suggested. These are supposed to enhance the predictive power of the general approach and to help improving or adjusting the microscopic models. This could be a way to overcome the present difficulties for producing evaluations with the required accuracy.
Charge regularization in phase separating polyelectrolyte solutions.
Muthukumar, M; Hua, Jing; Kundagrami, Arindam
2010-02-28
Theoretical investigations of phase separation in polyelectrolyte solutions have so far assumed that the effective charge of the polyelectrolyte chains is fixed. The ability of the polyelectrolyte chains to self-regulate their effective charge due to the self-consistent coupling between ionization equilibrium and polymer conformations, depending on the dielectric constant, temperature, and polymer concentration, affects the critical phenomena and phase transitions drastically. By considering salt-free polyelectrolyte solutions, we show that the daughter phases have different polymer charges from that of the mother phase. The critical point is also altered significantly by the charge self-regularization of the polymer chains. This work extends the progress made so far in the theory of phase separation of strong polyelectrolyte solutions to a higher level of understanding by considering chains which can self-regulate their charge.
Regularity of inviscid shell models of turbulence
NASA Astrophysics Data System (ADS)
Constantin, Peter; Levant, Boris; Titi, Edriss S.
2007-01-01
In this paper we continue the analytical study of the sabra shell model of the energy turbulent cascade. We prove the global existence of weak solutions of the inviscid sabra shell model, and show that these solutions are unique for some short interval of time. In addition, we prove that the solutions conserve energy, provided that the components of the solution satisfy |u_n| ≤ C k_n^(-1/3) [n log(n+1)]^(-1) for some positive absolute constant C, which is the analog of Onsager's conjecture for the Euler equations. Moreover, we give a Beale-Kato-Majda type criterion for the blow-up of solutions of the inviscid sabra shell model and show the global regularity of the solutions in the "two-dimensional" parameters regime.
Regularization of Nutation Time Series at GSFC
NASA Astrophysics Data System (ADS)
Le Bail, K.; Gipson, J. M.; Bolotin, S.
2012-12-01
VLBI is unique in its ability to measure all five Earth orientation parameters. In this paper we focus on the two nutation parameters which characterize the orientation of the Earth's rotation axis in space. We look at the periodicities and the spectral characteristics of these parameters for both R1 and R4 sessions independently. The most significant periodic signal with period shorter than 600 days (a period of 450 days) is common to these four time series, and the Allan variance identifies the noise type of all four series as white noise. To investigate methods of regularizing the series, we look at a Singular Spectrum Analysis-derived method and at the Kalman filter. The two methods adequately reproduce the tendency of the nutation time series, but the resulting series are noisier using the Singular Spectrum Analysis-derived method.
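Basic Singular Spectrum Analysis smoothing, of the kind the SSA-derived method builds on, can be sketched in a few lines: embed the series in a Hankel trajectory matrix, truncate its SVD, and diagonal-average back to a series. The window length, rank, and synthetic series below are illustrative, not the nutation processing chain.

```python
import numpy as np

def ssa_smooth(x, window, rank):
    """SSA reconstruction: embed, truncate the SVD, diagonal-average."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    rec = np.zeros(n); cnt = np.zeros(n)
    for j in range(k):                     # Hankelization (anti-diagonals)
        rec[j:j + window] += approx[:, j]
        cnt[j:j + window] += 1
    return rec / cnt

t = np.arange(400)
clean = np.sin(2 * np.pi * t / 50.0)       # stand-in periodic signal
rng = np.random.default_rng(10)
noisy = clean + 0.3 * rng.standard_normal(400)
smoothed = ssa_smooth(noisy, window=60, rank=2)
```

A pure sinusoid occupies a rank-2 trajectory subspace, so rank 2 retains the oscillation while discarding most of the white noise identified by the Allan variance analysis above.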
Thermodynamics of regular accelerating black holes
NASA Astrophysics Data System (ADS)
Astorino, Marco
2017-03-01
Using the covariant phase space formalism, we compute the conserved charges for a solution describing an accelerating and electrically charged Reissner-Nordstrom black hole. The metric is regular provided that the acceleration is driven by an external electric field, in which case the string of the standard C-metric is absent. The Smarr formula and the first law of black hole thermodynamics are fulfilled. The resulting mass has the same form as the Christodoulou-Ruffini irreducible mass. On the basis of these results, we extrapolate the mass and thermodynamics of the rotating C-metric, which describes a Kerr-Newman-(A)dS black hole accelerated by a pulling string.
Regularization destriping of remote sensing imagery
NASA Astrophysics Data System (ADS)
Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle
2017-07-01
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method by giving weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method from a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
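A stripped-down version of the variational idea can be sketched as follows: a quadratic functional with a data-fidelity term and a directional penalty across the stripes, minimized by explicit gradient descent on its Euler-Lagrange equation. The functional here is a simplified stand-in for the paper's spatially weighted target, and the synthetic image and parameters are illustrative.

```python
import numpy as np

def destripe(f, lam=5.0, tau=0.04, iters=800):
    """Explicit gradient descent on
    J(u) = 0.5*||u - f||^2 + 0.5*lam*||D_y u||^2,
    where D_y differentiates across the (horizontal) stripes.
    Stable for tau < 2 / (1 + 4*lam)."""
    u = f.copy()
    for _ in range(iters):
        d2 = np.zeros_like(u)
        d2[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]   # vertical Laplacian
        d2[0] = u[1] - u[0]                          # Neumann boundaries
        d2[-1] = u[-2] - u[-1]
        u -= tau * ((u - f) - lam * d2)              # Euler-Lagrange descent
    return u

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
clean = np.exp(-((x - 32)**2 + (y - 32)**2) / (2 * 8.0**2))
rng = np.random.default_rng(7)
striped = clean + 0.5 * rng.standard_normal((ny, 1))  # constant-row stripes
u = destripe(striped)
```

The real method additionally weights the penalty spatially so that only stripe-like (directionally uniform) features are suppressed; the uniform quadratic penalty above slightly blurs genuine structure as well.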
Regularized discriminative direction for shape difference analysis.
Zhou, Luping; Hartley, Richard; Wang, Lei; Lieby, Paulette; Barnes, Nick
2008-01-01
The "discriminative direction" has been proven useful to reveal the subtle difference between two anatomical shape classes. When a shape moves along this direction, its deformation will best manifest the class difference detected by a kernel classifier. However, we observe that such a direction cannot maintain a shape's "anatomical" correctness, introducing spurious difference. To overcome this drawback, we develop a regularized discriminative direction by requiring a shape to conform to its population distribution when it deforms along the discriminative direction. Instead of iterative optimization, an analytic solution is provided to directly work out this direction. Experimental study shows its superior performance in detecting and localizing the difference of hippocampal shapes for sex. The result is supported by other independent research in the same domain.
Regularity of free boundaries a heuristic retro
Caffarelli, Luis A.; Shahgholian, Henrik
2015-01-01
This survey concerns regularity theory of a few free boundary problems that have been developed in the past half a century. Our intention is to bring up different ideas and techniques that constitute the fundamentals of the theory. We shall discuss four different problems, where approaches are somewhat different in each case. Nevertheless, these problems can be divided into two groups: (i) obstacle and thin obstacle problem; (ii) minimal surfaces, and cavitation flow of a perfect fluid. In each case, we shall only discuss the methodology and approaches, giving basic ideas and tools that have been specifically designed and tailored for that particular problem. The survey is kept at a heuristic level with mainly geometric interpretation of the techniques and situations in hand. PMID:26261372
Regularization for Atmospheric Temperature Retrieval Problems
NASA Technical Reports Server (NTRS)
Velez-Reyes, Miguel; Galarza-Galarza, Ruben
1997-01-01
Passive remote sensing of the atmosphere is used to determine the atmospheric state. A radiometer measures microwave emissions from earth's atmosphere and surface. The radiance measured by the radiometer is proportional to the brightness temperature. This brightness temperature can be used to estimate atmospheric parameters such as temperature and water vapor content. These quantities are of primary importance for different applications in meteorology, oceanography, and geophysical sciences. Depending on the range in the electromagnetic spectrum being measured by the radiometer and the atmospheric quantities to be estimated, the retrieval or inverse problem of determining atmospheric parameters from brightness temperature might be linear or nonlinear. In most applications, the retrieval problem requires the inversion of a Fredholm integral equation of the first kind making this an ill-posed problem. The numerical solution of the retrieval problem requires the transformation of the continuous problem into a discrete problem. The ill-posedness of the continuous problem translates into ill-conditioning or ill-posedness of the discrete problem. Regularization methods are used to convert the ill-posed problem into a well-posed one. In this paper, we present some results of our work in applying different regularization techniques to atmospheric temperature retrievals using brightness temperatures measured with the SSM/T-1 sensor. Simulation results are presented which show the potential of these techniques to improve temperature retrievals. In particular, no statistical assumptions are needed and the algorithms were capable of correctly estimating the temperature profile corner at the tropopause independent of the initial guess.
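The retrieval step described above, inverting a discretized Fredholm equation of the first kind, is the classic setting for Tikhonov regularization. The sketch below uses a made-up smoothing kernel as a stand-in for the radiative-transfer weighting functions (it is not the SSM/T-1 forward model), simply to show why the regularized inverse succeeds where the naive one fails:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
s = np.linspace(0.0, 1.0, n)
# Toy ill-conditioned smoothing kernel (illustrative, not a real weighting function)
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.3 ** 2))
x_true = np.sin(2 * np.pi * s)                  # "true" temperature-like profile
y = K @ x_true + 1e-3 * rng.standard_normal(n)  # noisy "brightness temperatures"

def tikhonov(K, y, alpha):
    """Solve min ||K x - y||^2 + alpha ||x||^2 via the normal equations."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ y)

# Unregularized least squares: the tiny singular values of K amplify the noise
x_naive, *_ = np.linalg.lstsq(K, y, rcond=None)
# A small quadratic penalty restores stability
x_reg = tikhonov(K, y, alpha=1e-4)
```

This is the simplest zeroth-order penalty; the paper compares several regularization techniques of this family, chosen without statistical assumptions on the profile.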
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations for final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB code for our methods is available at http://www.yongxu.org/lunwen.html.
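The elastic-net penalty at the heart of ENLR combines an L1 and an L2 term. The sketch below is not the authors' ENLR algorithm (which relaxes the binary targets and shrinks singular values); it is a generic coordinate-descent elastic net on a coefficient vector, shown only to make the penalty concrete. All data and parameter values are illustrative.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Scalar soft-thresholding operator, the prox of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def elastic_net(X, y, lam1, lam2, n_iter=200):
    """Coordinate descent for min 0.5||y - Xw||^2 + lam1||w||_1 + 0.5*lam2||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]  # partial residual excluding feature j
            w[j] = soft_threshold(X[:, j] @ r, lam1) / (col_sq[j] + lam2)
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
w_true = np.zeros(10)
w_true[0], w_true[3] = 5.0, 3.0          # sparse ground truth
y = X @ w_true
w = elastic_net(X, y, lam1=1.0, lam2=0.1)
```

The L1 part zeroes out irrelevant coefficients while the L2 part stabilizes correlated ones, which is the compactness/robustness trade-off the abstract refers to.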
Black hole mimickers: Regular versus singular behavior
Lemos, Jose P. S.; Zaslavskii, Oleg B.
2008-07-15
Black hole mimickers are possible alternatives to black holes; they would look observationally almost like black holes but would have no horizon. The properties in the near-horizon region where gravity is strong can be quite different for the two types of objects, but at infinity it could be difficult to discern black holes from their mimickers. To disentangle this possible confusion, we examine the near-horizon properties, and their connection with far away asymptotic properties, of some candidates for black hole mimickers. We study spherically symmetric uncharged or charged but nonextremal objects, as well as spherically symmetric charged extremal objects. Within the uncharged or charged but nonextremal black hole mimickers, we study nonextremal {epsilon}-wormholes on the threshold of the formation of an event horizon, of which a subclass are called black foils, and gravastars. Within the charged extremal black hole mimickers we study extremal {epsilon}-wormholes on the threshold of the formation of an event horizon, quasi-black holes, and wormholes on the basis of quasi-black holes from Bonnor stars. We elucidate whether or not the objects belonging to these two classes remain regular in the near-horizon limit. The requirement of full regularity, i.e., finite curvature and absence of naked behavior, up to an arbitrary neighborhood of the gravitational radius of the object enables one to rule out potential mimickers in most of the cases. A list ranking the black hole mimickers from best to worst, both nonextremal and extremal, is as follows: wormholes on the basis of extremal black holes or on the basis of quasi-black holes, quasi-black holes, wormholes on the basis of nonextremal black holes (black foils), and gravastars. Since in observational astrophysics it is difficult to find extremal configurations (the best mimickers in the ranking), whereas nonextremal configurations are really bad mimickers, the task of distinguishing black holes from their mimickers seems to be achievable.
Regularization of Instantaneous Frequency Attribute Computations
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency based on the theory of the analytic signal; (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or the Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes.
References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
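Method (1) can be sketched with SciPy's Hilbert transform (assuming SciPy is available); the pointwise phase derivative computed here is exactly the quantity that the roughness-penalized regularization then stabilizes on noisy field traces. The sampling rate and test tone are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                        # sampling rate, Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 50.0 * t)   # a pure 50 Hz tone as a toy trace

z = hilbert(x)                     # analytic signal x + i*H[x]
phase = np.unwrap(np.angle(z))
# Instantaneous frequency = derivative of instantaneous phase (in Hz)
inst_freq = np.diff(phase) * fs / (2 * np.pi)
```

On real seismograms this estimate is noisy wherever the envelope is small (the division by the squared envelope hidden in the phase derivative), which is where the stabilization and smoothing discussed in the abstract become essential.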
The Essential Special Education Guide for the Regular Education Teacher
ERIC Educational Resources Information Center
Burns, Edward
2007-01-01
The Individuals with Disabilities Education Act (IDEA) of 2004 has placed a renewed emphasis on the importance of the regular classroom, the regular classroom teacher and the general curriculum as the primary focus of special education. This book contains over 100 topics that deal with real issues and concerns regarding the regular classroom and…
Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis
ERIC Educational Resources Information Center
Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.
2007-01-01
Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…
39 CFR 6.1 - Regular meetings, annual meeting.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 39 (Postal Service), United States Postal Service, The Board of Governors of the U.S. Postal Service, Meetings (Article VI), § 6.1 Regular meetings, annual meeting: The Board shall meet regularly on a schedule...
5 CFR 532.203 - Structure of regular wage schedules.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 5 (Administrative Personnel), Prevailing Rate Systems, Prevailing Rate Determinations, § 532.203 Structure of regular wage schedules: (a) Each nonsupervisory and leader regular wage schedule shall have 15 grades, which shall be designated as...
ERIC Educational Resources Information Center
Brown, Joyceanne; And Others
1991-01-01
This survey of 201 regular education teachers found that the prereferral strategies most frequently used to facilitate classroom adjustment and achievement were consultation with other professionals, parent conferences, and behavior management techniques. Elementary teachers implemented more strategies than secondary-level teachers.…
The connection between regularization operators and support vector kernels.
Smola, Alex J.; Schölkopf, Bernhard; Müller, Klaus Robert
1998-06-01
In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels in view of regularization theory and of the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodic domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.
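A quick numerical check of the admissibility condition underlying this correspondence: a Mercer kernel must yield a symmetric positive semidefinite Gram matrix on any point set. The Gaussian RBF kernel, whose associated regularization operator penalizes derivatives of all orders, passes this check; the point set and bandwidth below are arbitrary illustrations:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 3))   # 40 arbitrary points in R^3
K = rbf_kernel(X, X)
# Mercer's condition: the Gram matrix is PSD (eigenvalues >= 0 up to roundoff)
eigvals = np.linalg.eigvalsh(K)
```

Positive semidefiniteness is necessary but not the whole story; the paper's point is that the spectrum of the associated regularization operator tells you *what kind* of smoothness the kernel enforces.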
NASA Astrophysics Data System (ADS)
Filippone, Michele; Brouwer, Piet W.
2016-12-01
Tunneling between a point contact and a one-dimensional wire is usually described with the help of a tunneling Hamiltonian that contains a δ function in position space. Whereas the leading-order contribution to the tunneling current is independent of the way this δ function is regularized, higher-order corrections with respect to the tunneling amplitude are known to depend on the regularization. Instead of regularizing the δ function in the tunneling Hamiltonian, one may also obtain a finite tunneling current by invoking the ultraviolet cutoffs in a field-theoretic description of the electrons in the one-dimensional conductor, a procedure that is often used in the literature. For the latter case, we show that standard ultraviolet cutoffs lead to different results for the tunneling current in fermionic and bosonized formulations of the theory, when going beyond leading order in the tunneling amplitude. We show how to recover the standard fermionic result using the formalism of functional bosonization and revisit the tunneling current to leading order in the interacting case.
Preparation of Regular Specimens for Atom Probes
NASA Technical Reports Server (NTRS)
Kuhlman, Kim; Wishard, James
2003-01-01
A method of preparation of specimens of non-electropolishable materials for analysis by atom probes is being developed as a superior alternative to a prior method. In comparison with the prior method, the present method involves less processing time. Also, whereas the prior method yields irregularly shaped and sized specimens, the present developmental method offers the potential to prepare specimens of regular shape and size. The prior method is called the method of sharp shards because it involves crushing the material of interest and selecting microscopic sharp shards of the material for use as specimens. Each selected shard is oriented with its sharp tip facing away from the tip of a stainless-steel pin and is glued to the tip of the pin by use of silver epoxy. Then the shard is milled by use of a focused ion beam (FIB) to make the shard very thin (relative to its length) and to make its tip sharp enough for atom-probe analysis. The method of sharp shards is extremely time-consuming because the selection of shards must be performed with the help of a microscope, the shards must be positioned on the pins by use of micromanipulators, and the irregularity of size and shape necessitates many hours of FIB milling to sharpen each shard. In the present method, a flat slab of the material of interest (e.g., a polished sample of rock or a coated semiconductor wafer) is mounted in the sample holder of a dicing saw of the type conventionally used to cut individual integrated circuits out of the wafers on which they are fabricated in batches. A saw blade appropriate to the material of interest is selected. The depth of cut and the distance between successive parallel cuts are chosen such that what is left after the cuts is a series of thin, parallel ridges on a solid base. Then the workpiece is rotated 90° and the pattern of cuts is repeated, leaving behind a square array of square posts on the solid base. The posts can be made regular, long, and thin, as required for samples
Error analysis for matrix elastic-net regularization algorithms.
Li, Hong; Chen, Na; Li, Luoqing
2012-05-01
Elastic-net regularization is a successful approach in statistical modeling. It can avoid large variations which occur in estimating complex models. In this paper, elastic-net regularization is extended to a more general setting, the matrix recovery (matrix completion) setting. Based on a combination of the nuclear-norm minimization and the Frobenius-norm minimization, we consider the matrix elastic-net (MEN) regularization algorithm, which is an analog to the elastic-net regularization scheme from compressive sensing. Some properties of the estimator are characterized by the singular value shrinkage operator. We estimate the error bounds of the MEN regularization algorithm in the framework of statistical learning theory. We compute the learning rate by estimates of the Hilbert-Schmidt operators. In addition, an adaptive scheme for selecting the regularization parameter is presented. Numerical experiments demonstrate the superiority of the MEN regularization algorithm.
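The singular value shrinkage operator mentioned in the abstract can be written as the proximal map of the combined nuclear-norm and Frobenius-norm penalties: soft-threshold the singular values, then rescale them. A minimal sketch on a synthetic low-rank matrix (the data and the penalty weights are illustrative, not from the paper):

```python
import numpy as np

def men_shrink(Z, lam1, lam2):
    """Prox of lam1*||X||_* + (lam2/2)*||X||_F^2 at Z:
    soft-threshold, then rescale, the singular values of Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_new = np.maximum(s - lam1, 0.0) / (1.0 + lam2)
    return U @ (s_new[:, None] * Vt)

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 3))
B = rng.standard_normal((3, 20))
M = A @ B                                     # rank-3 ground truth
Y = M + 0.05 * rng.standard_normal((20, 20))  # noisy, full-rank observation
X_hat = men_shrink(Y, lam1=1.0, lam2=0.05)    # low-rank denoised estimate
```

The nuclear-norm part kills the small (noise-dominated) singular values, recovering the low rank, while the Frobenius part adds the stabilizing ridge-like shrinkage that distinguishes the elastic net from plain singular value thresholding.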
On a space-frequency regularization for source reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, Mathieu; De Smet, Olivier
2016-09-01
To identify mechanical sources acting on a structure, Tikhonov-like regularizations are generally used. These approaches, referred to as additive regularizations, require the calculation of a regularization parameter from adapted selection procedures such as the L-curve method. However, such selection procedures can be computationally intensive. In this contribution, a space-frequency multiplicative regularization is introduced. The proposed strategy has the merit of avoiding the need for the determination of a regularization parameter beforehand, while taking advantage of one's prior knowledge of the type of the sources as well as the nature of the excitation signal. By construction, the regularized solution is computed in an iterative manner, which allows adapting the importance of the regularization term all along the resolution process. The validity of the proposed approach is illustrated numerically on a simply supported beam.
Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking
NASA Astrophysics Data System (ADS)
Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.
2017-03-01
Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression, while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method (1) builds a data-driven 4-dimensional dictionary of Lagrangian displacements using sparse learning, (2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and (3) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block matching speckle tracking and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.
Manifold regularized multitask learning for semi-supervised multilabel image classification.
Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J
2013-02-01
It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
Full Regularity for a C*-Algebra of the Canonical Commutation Relations
NASA Astrophysics Data System (ADS)
Grundling, Hendrik; Neeb, Karl-Hermann
The Weyl algebra, the usual C*-algebra employed to model the canonical commutation relations (CCRs), has a well-known defect: it has a large number of representations which are not regular, and these cannot model physical fields. Here, we construct explicitly a C*-algebra which can reproduce the CCRs of a countably dimensional symplectic space (S, B) and such that its representation set is exactly the full set of regular representations of the CCRs. This construction uses Blackadar's version of infinite tensor products of nonunital C*-algebras, and it produces a "host algebra" (i.e. a generalized group algebra, explained below) for the σ-representation theory of the Abelian group S, where σ(·,·) := e^{iB(·,·)/2}. As an easy application, it then follows that for every regular representation of the closure of Δ(S, B) on a separable Hilbert space, there is a direct integral decomposition of it into irreducible regular representations (a known result).
Information theoretic regularization in diffuse optical tomography.
Panagiotou, Christos; Somayajula, Sangeetha; Gibson, Adam P; Schweiger, Martin; Leahy, Richard M; Arridge, Simon R
2009-05-01
Diffuse optical tomography (DOT) retrieves the spatially distributed optical characteristics of a medium from external measurements. Recovering the parameters of interest involves solving a nonlinear and highly ill-posed inverse problem. This paper examines the possibility of regularizing DOT via the introduction of a priori information from alternative high-resolution anatomical modalities, using the information theory concepts of mutual information (MI) and joint entropy (JE). Such functionals evaluate the similarity between the reconstructed optical image and the prior image while bypassing the multimodality barrier manifested as the incommensurate relation between the gray value representations of corresponding anatomical features in the two modalities. By introducing structural information, we aim to improve the spatial resolution and quantitative accuracy of the solution. We provide a thorough explanation of the theory from an imaging perspective, accompanied by preliminary results using numerical simulations. In addition we compare the performance of MI and JE. Finally, we have adopted a method for fast marginal entropy evaluation and optimization by modifying the objective function and extending it to the JE case. We demonstrate its use on an image reconstruction framework and show significant computational savings.
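The similarity functional at the core of this approach, mutual information, can be estimated from a joint histogram of the two images, which is also why it bypasses the incommensurate gray-value problem: only co-occurrence statistics matter, not the values themselves. A minimal sketch with an illustrative bin count and synthetic "images" (not a DOT reconstruction):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information (in nats) between two
    equally shaped images a and b."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                       # joint distribution over bin pairs
    px = p.sum(axis=1, keepdims=True)     # marginal of a
    py = p.sum(axis=0, keepdims=True)     # marginal of b
    nz = p > 0                            # avoid log(0): 0*log(0) contributes 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(4)
a = rng.uniform(size=(64, 64))            # synthetic "optical" image
b = rng.uniform(size=(64, 64))            # independent "anatomical" image
mi_self = mutual_information(a, a)        # maximal: a predicts itself exactly
mi_indep = mutual_information(a, b)       # near zero (up to histogram bias)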
Color correction optimization with hue regularization
NASA Astrophysics Data System (ADS)
Zhang, Heng; Liu, Huaping; Quan, Shuxue
2011-01-01
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
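The baseline step described in the abstract, fitting a color correction matrix by linear regression between device and target color spaces, can be sketched as follows. The matrix and data here are synthetic stand-ins, and the paper's actual contribution, the hue-regularization term that protects memory colors, is not shown:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical "true" device-to-target transform (illustrative values)
M_true = np.array([[1.20, -0.10, 0.00],
                   [0.05,  0.90, 0.05],
                   [0.00, -0.20, 1.10]])
rgb_device = rng.uniform(0.0, 1.0, (100, 3))              # device-space samples
rgb_target = rgb_device @ M_true.T + 0.005 * rng.standard_normal((100, 3))

# Least-squares color correction matrix: rgb_target ~= rgb_device @ M.T,
# i.e. minimize the Euclidean color error over all samples
Mt, *_ = np.linalg.lstsq(rgb_device, rgb_target, rcond=None)
M = Mt.T
```

The hue-regularized variant would add a penalty on hue shifts of selected memory colors (skin, grass, sky) to this least-squares objective, trading a little overall Euclidean error for perceptually safer renditions.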
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples for alleviating the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
Determinants of Scanpath Regularity in Reading.
von der Malsburg, Titus; Kliegl, Reinhold; Vasishth, Shravan
2015-09-01
Scanpaths have played an important role in classic research on reading behavior. Nevertheless, they have largely been neglected in later research, perhaps due to a lack of suitable analytical tools. Recently, von der Malsburg and Vasishth (2011) proposed a new measure for quantifying differences between scanpaths and demonstrated that this measure can recover effects that were missed with the traditional eyetracking measures. However, the sentences used in that study were difficult to process, and the scanpath effects were accordingly strong. The purpose of the present study was to test the validity, sensitivity, and scope of applicability of the scanpath measure, using simple sentences that are typically read from left to right. We derived predictions for the regularity of scanpaths from the literature on oculomotor control, sentence processing, and cognitive aging and tested these predictions using the scanpath measure and a large database of eye movements. All predictions were confirmed: Sentences with short words and syntactically more difficult sentences elicited more irregular scanpaths. Also, older readers produced more irregular scanpaths than younger readers. In addition, we found an effect that was not reported earlier: Syntax had a smaller influence on the eye movements of older readers than on those of young readers. We discuss this interaction of syntactic parsing cost with age in terms of shifts in processing strategies and a decline of executive control as readers age. Overall, our results demonstrate the validity and sensitivity of the scanpath measure and thus establish it as a productive and versatile tool for reading research.
Menstrual Bleeding Patterns Among Regularly Menstruating Women
Dasharathy, Sonya S.; Mumford, Sunni L.; Pollack, Anna Z.; Perkins, Neil J.; Mattison, Donald R.; Wactawski-Wende, Jean; Schisterman, Enrique F.
2012-01-01
Menstrual bleeding patterns are considered relevant indicators of reproductive health, though few studies have evaluated patterns among regularly menstruating premenopausal women. The authors evaluated self-reported bleeding patterns, incidence of spotting, and associations with reproductive hormones among 201 women in the BioCycle Study (2005–2007) with 2 consecutive cycles. Bleeding patterns were assessed by using daily questionnaires and pictograms. Marginal structural models were used to evaluate associations between endogenous hormone concentrations and subsequent total reported blood loss and bleeding length by weighted linear mixed-effects models and weighted parametric survival analysis models. Women bled for a median of 5 days (standard deviation: 1.5) during menstruation, with heavier bleeding during the first 3 days. Only 4.8% of women experienced midcycle bleeding. Increased levels of follicle-stimulating hormone (β = 0.20, 95% confidence interval: 0.13, 0.27) and progesterone (β = 0.06, 95% confidence interval: 0.03, 0.09) throughout the cycle were associated with heavier menstrual bleeding, and higher follicle-stimulating hormone levels were associated with longer menses. Bleeding duration and volume were reduced after anovulatory compared with ovulatory cycles (geometric mean blood loss: 29.6 vs. 47.2 mL; P = 0.07). Study findings suggest that detailed characterizations of bleeding patterns may provide more insight than previously thought as noninvasive markers for endocrine status in a given cycle. PMID:22350580
Flip to Regular Triangulation and Convex Hull.
Gao, Mingcen; Cao, Thanh-Tung; Tan, Tiow-Seng
2017-02-01
Flip is a simple and local operation to transform one triangulation to another. It makes changes only to some neighboring simplices, without considering any attribute or configuration global in nature to the triangulation. Thanks to this characteristic, several flips can be independently applied to different small, non-overlapping regions of one triangulation. Such an operation is favored when designing algorithms for data-parallel, massively multithreaded hardware, such as the GPU. However, most existing flip algorithms are designed to be executed sequentially, and usually need some restrictions on the execution order of flips, making them hard to adapt to parallel computation. In this paper, we present an in-depth study of flip algorithms in low dimensions, with the emphasis on the flexibility of their execution order. In particular, we propose a series of provably correct flip algorithms for regular triangulation and convex hull in 2D and 3D, with implementations for both CPUs and GPUs. Our experiment shows that our GPU implementation for constructing these structures from a given point set achieves up to two orders of magnitude of speedup over other popular single-threaded CPU implementations of existing algorithms.
Compression and regularization with the information bottleneck
NASA Astrophysics Data System (ADS)
Strouse, Dj; Schwab, David
Compression fundamentally involves a decision about what is relevant and what is not. The information bottleneck (IB) by Tishby, Pereira, and Bialek formalized this notion as an information-theoretic optimization problem and proposed an optimal tradeoff between throwing away as many bits as possible, and selectively keeping those that are most important. The IB has also recently been proposed as a theory of sensory gating and predictive computation in the retina by Palmer et al. Here, we introduce an alternative formulation of the IB, the deterministic information bottleneck (DIB), that we argue better captures the notion of compression, including that done by the brain. As suggested by its name, the solution to the DIB problem is a deterministic encoder, as opposed to the stochastic encoder that is optimal under the IB. We then compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB vastly outperforms the IB in terms of the DIB cost function. Our derivation of the DIB also provides a family of models which interpolates between the DIB and IB by adding noise of a particular form. We discuss the role of this noise as a regularizer.
Reverberation mapping by regularized linear inversion
NASA Astrophysics Data System (ADS)
Krolik, Julian H.; Done, Christine
1995-02-01
Reverberation mapping of active galactic nucleus (AGN) emission-line regions requires the numerical deconvolution of two time series. We suggest the application of a new method, regularized linear inversion, to the solution of this problem. This method possesses many good features: it imposes no restrictions on the sign of the response function; it provides clearly defined uncertainty estimates; it involves no guesswork about unmeasured data; it can give a clear indication of when the underlying convolution model is inadequate; and it is computationally very efficient. Using simulated data, we find the minimum S/N and length of the time series required for this method to work satisfactorily. We also define guidelines for choosing the principal tunable parameter of the method and for interpreting the results. Finally, we reanalyze published data from the 1989 NGC 5548 campaign using this new method and compare the results to those previously obtained by maximum entropy analysis. For some lines we find good agreement, but for others, especially C III λ1909 and Si IV λ1400, we find significant differences. These can be attributed to the inability of the maximum entropy method to find negative values of the response function, but they also illustrate the nonuniqueness of any deconvolution technique. We also find evidence that certain line light curves (e.g., C IV λ1549) cannot be fully described by the simple linear convolution model.
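The core of regularized linear inversion can be sketched as penalized least squares: minimize the data misfit plus a smoothness penalty on the response function via the normal equations. A toy pure-Python sketch (the first-difference smoothing operator and the tiny test system are illustrative assumptions, not the paper's light-curve data):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def regularized_deconvolve(C, y, lam):
    """Minimize |C*psi - y|^2 + lam*|D*psi|^2, D = first differences,
    via the normal equations (C^T C + lam D^T D) psi = C^T y.
    Unlike maximum entropy, nothing here forces psi >= 0, so negative
    response values are allowed."""
    m, n = len(C), len(C[0])
    CtC = [[sum(C[r][i] * C[r][j] for r in range(m)) for j in range(n)]
           for i in range(n)]
    Cty = [sum(C[r][i] * y[r] for r in range(m)) for i in range(n)]
    for i in range(n):                       # add lam * D^T D
        for j in range(n):
            s = 0.0
            for k in range(n - 1):           # row k of D: -1 at k, +1 at k+1
                dki = (1.0 if k + 1 == i else 0.0) - (1.0 if k == i else 0.0)
                dkj = (1.0 if k + 1 == j else 0.0) - (1.0 if k == j else 0.0)
                s += dki * dkj
            CtC[i][j] += lam * s
    return solve(CtC, Cty)
```

Increasing `lam` trades fidelity for smoothness; the paper's guidelines concern precisely this tunable parameter.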
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2016-11-10
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
Pauli-Villars regularization of field theories on the light front
Hiller, John R.
2010-12-22
Four-dimensional quantum field theories generally require regularization to be well defined. This can be done in various ways, but here we focus on Pauli-Villars (PV) regularization and apply it to nonperturbative calculations of bound states. The philosophy is to introduce enough PV fields to the Lagrangian to regulate the theory perturbatively, including preservation of symmetries, and assume that this is sufficient for the nonperturbative case. The numerical methods usually necessary for nonperturbative bound-state problems are then applied to a finite theory that has the original symmetries. The bound-state problem is formulated as a mass eigenvalue problem in terms of the light-front Hamiltonian. Applications to quantum electrodynamics are discussed.
Temporal Regularity of the Environment Drives Time Perception
2016-01-01
It is reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stimulus was regular or not. We found that participants exposed to an irregular environment frequently reported perfectly regularly paced stimuli to be irregular. In a second experiment, we asked participants to judge whether the final stimulus was presented before or after a flash. In this way, we were able to measure distortions in temporal perception as changes in the timing necessary for the sound and the flash to be perceived as synchronous. We found that within a regular context, the perceived timing of deviant last stimuli changed so that the relative anisochrony appeared perceptually decreased. In the irregular context, the perceived timing of irregular stimuli following a regular sequence was not affected. These observations suggest that humans use temporal expectations to evaluate the regularity of sequences and that expectations are combined with sensory stimuli to adapt perceived timing to the statistics of the environment. Expectations can be seen as a priori probabilities on which the perceived timing of stimuli depends. PMID:27441686
About the Regularized Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Cannone, Marco; Karch, Grzegorz
2005-03-01
The first goal of this paper is to study the large time behavior of solutions to the Cauchy problem for the 3-dimensional incompressible Navier-Stokes system. The Marcinkiewicz space L^{3,∞} is used to prove some asymptotic stability results for solutions with infinite energy. Next, this approach is applied to the analysis of two classical “regularized” Navier-Stokes systems. The first one was introduced by J. Leray and consists in “mollifying” the nonlinearity. The second one was proposed by J.-L. Lions, who added the artificial hyper-viscosity (-Δ)^{ℓ/2}, ℓ > 2, to the model. It is shown in the present paper that, in the whole space, solutions to those modified models converge as t → ∞ toward solutions of the original Navier-Stokes system.
NASA Astrophysics Data System (ADS)
Cao, Chongsheng
In this dissertation, we study different properties of the solutions of several dissipative evolution systems. First, we study regularity, namely a Gevrey class regularity of the solutions of nonlinear analytic parabolic equations and the Navier-Stokes equations on the two-dimensional sphere. We prove instantaneous Gevrey regularity for these systems. In addition, we provide an estimate for the number of determining modes and nodes for two-dimensional turbulent flows on the sphere. Next, we study the existence and uniqueness of solutions of the Lake equations, a special shallow water model of fluid flow in a shallow basin with varying bottom topography. We show the global existence of weak solutions for these equations with certain degenerate varying bottom topography, i.e., in the presence of beaches. We then show uniqueness in the case of non-degenerate but non-regular topography. Finally, we consider a feedback control problem for the Navier-Stokes equations. Namely, we show that if one is able to design a linear feedback control that stabilizes a stationary solution of the Galerkin approximating scheme of the Navier-Stokes equations, then the same feedback controller in fact stabilizes a nearby exact steady state of the closed-loop Navier-Stokes equations. It is worth stressing that all the conditions of this statement are checkable on the computed Galerkin approximating solution. The same result is also true in the context of nonlinear Galerkin methods, which are based on the theory of Approximate Inertial Manifolds, and for various other nonlinear dissipative parabolic systems.
TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY
Crotts, Arlin P. S.
2009-05-20
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ~50% of reports originate from near Aristarchus, ~16% from Plato, ~6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ~80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.
On reducibility of degenerate optimization problems to regular operator equations
NASA Astrophysics Data System (ADS)
Bednarczuk, E. M.; Tretyakov, A. A.
2016-12-01
We present an application of the p-regularity theory to the analysis of non-regular (irregular, degenerate) nonlinear optimization problems. The p-regularity theory, also known as the p-factor analysis of nonlinear mappings, has been developed over the last thirty years. The p-factor analysis is based on the construction of the p-factor operator, which allows us to analyze optimization problems in the degenerate case. We investigate the reducibility of a non-regular optimization problem to a regular system of equations which does not depend on the objective function. As an illustration, we consider applications of our results to non-regular complementarity problems of mathematical programming and to linear programming problems.
Phase-regularized polygon computer-generated holograms.
Im, Dajeong; Moon, Eunkyoung; Park, Yohan; Lee, Deokhwan; Hahn, Joonku; Kim, Hwi
2014-06-15
The dark-line defect problem in the conventional polygon computer-generated hologram (CGH) is addressed. To resolve this problem, we clarify the physical origin of the defect and address the concept of phase-regularization. A novel synthesis algorithm for a phase-regularized polygon CGH for generating photorealistic defect-free holographic images is proposed. The optical reconstruction results of the phase-regularized polygon CGHs without the dark-line defects are presented.
Analysis of regularized Navier-Stokes equations. I, II
NASA Technical Reports Server (NTRS)
Ou, Yuh-Roung; Sritharan, S. S.
1991-01-01
A regularized form of the conventional Navier-Stokes equations is analyzed. The global existence and uniqueness are established for two classes of generalized solutions. It is shown that the solution of this regularized system converges to the solution of the conventional Navier-Stokes equations for low Reynolds numbers. Particular attention is given to the structure of attractors characterizing the solutions. Both local and global invariant manifolds are found, and the regularity properties of these manifolds are analyzed.
Hoang, Andre H.; Ruiz-Femenia, Pedro
2006-12-01
We discuss the form and construction of general color singlet heavy particle-antiparticle pair production currents for arbitrary quantum numbers, and issues related to evanescent spin operators and scheme dependences in nonrelativistic QCD in n = 3 - 2ε dimensions. The anomalous dimensions of the leading interpolating currents for heavy quark and colored scalar pairs in arbitrary ^{2S+1}L_J angular-spin states are determined at next-to-leading order in the nonrelativistic power counting.
Batygin, Yuri K
2001-06-26
A method for calculating the space charge field of a beam using an expansion of the space charge potential and the space charge distribution as Fourier-Bessel series is discussed. The coefficients of the series are connected by an algebraic equation, which substantially simplifies the solution of the problem. The efficiency and accuracy of the method are discussed. The suggested method is effective in multidimensional problems in the study of intense charged-particle beams.
Current redistribution in resistor networks: Fat-tail statistics in regular and small-world networks
NASA Astrophysics Data System (ADS)
Lehmann, Jörg; Bernasconi, Jakob
2017-03-01
The redistribution of electrical currents in resistor networks after single-bond failures is analyzed in terms of current-redistribution factors that are shown to depend only on the topology of the network and on the values of the bond resistances. We investigate the properties of these current-redistribution factors for regular network topologies (e.g., d -dimensional hypercubic lattices) as well as for small-world networks. In particular, we find that the statistics of the current redistribution factors exhibits a fat-tail behavior, which reflects the long-range nature of the current redistribution as determined by Kirchhoff's circuit laws.
MATRIX: a 15 ps resistive interpolation TDC ASIC based on a novel regular structure
NASA Astrophysics Data System (ADS)
Mauricio, J.; Gascón, D.; Ciaglia, D.; Gómez, S.; Fernández, G.; Sanuy, A.
2016-12-01
This paper presents a 4-channel TDC ASIC with the following features: 15-ps LSB (9.34 ps after calibration), 10-ps jitter, < 4-ps time resolution, up to 10 MHz of sustained input rate per channel, 45 mW of power consumption and very low area (910×215 μm²) in a commercial 180 nm technology. The main contribution of this work is the novel design of the clock interpolation circuitry based on a resistive interpolation mesh circuit (patented), a two-dimensional regular structure with very good properties in terms of power consumption, area and process variability.
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
Casas, E.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
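A one-dimensional caricature shows why bounded-variation regularization suits blocky signals: the total-variation penalty flattens small oscillations while tolerating jumps. A sketch with a smoothed absolute value (the paper's primal-dual algorithms are replaced here, for brevity, by plain gradient descent on a smoothed functional; all parameters are illustrative):

```python
from math import sqrt

def tv_denoise_1d(f, lam, eps=1e-4, steps=5000, lr=0.02):
    """Minimize 0.5*|u - f|^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps)
    by gradient descent. The small eps smooths the kink of |.| that makes
    the BV seminorm nondifferentiable; jumps are penalized only linearly,
    so sharp block edges survive while noise wiggles are flattened."""
    u = list(f)
    n = len(u)
    for _ in range(steps):
        g = [u[i] - f[i] for i in range(n)]          # data-fidelity gradient
        for i in range(n - 1):                       # smoothed-TV gradient
            d = u[i + 1] - u[i]
            w = lam * d / sqrt(d * d + eps)
            g[i] -= w
            g[i + 1] += w
        for i in range(n):
            u[i] -= lr * g[i]
    return u
```

The total variation of the output is strictly reduced while the signal mean is preserved, the 1D analogue of denoising a blocky image without blurring its edges.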
A family of solutions of a higher order PVI equation near a regular singularity
NASA Astrophysics Data System (ADS)
Shimomura, Shun
2006-09-01
Restriction of the N-dimensional Garnier system to a complex line yields a system of second-order nonlinear differential equations, which may be regarded as a higher order version of the sixth Painlevé equation. Near a regular singularity of the system, we present a 2N-parameter family of solutions expanded into convergent series. These solutions are constructed by iteration, and their convergence is proved by using a kind of majorant series. For simplicity, we describe the proof in the case N = 2.
Fragmentation processes: from irregular mud-cracks to regular polygonal patterns
NASA Astrophysics Data System (ADS)
Jagla, Eduardo; Rojo, Alberto
2000-03-01
We consider an originally irregular, mud-crack-like pattern of fractures at the surface of a half-infinite medium that cools (or desiccates) from its surface. The fracture pattern advances towards the interior as the material progressively cools down. We show that the tendency of the two-dimensional pattern of fractures as a function of depth is to evolve smoothly towards polygonal configurations that minimize a free energy functional. Our model explains the origin of regular columnar structures of polygonal section in volcanic lava flows, and also in some desiccation experiments on starches. Statistical analysis of our results compares quite well with that of lava and starch.
NASA Astrophysics Data System (ADS)
Shen, Wenxian; Shen, Zhongwei
2017-03-01
The present paper is devoted to the investigation of various properties of transition fronts in one-dimensional nonlocal equations in heterogeneous media of ignition type, whose existence has been established by the authors in a previous work. It is first shown that transition fronts are continuously differentiable in space with uniformly bounded and uniformly Lipschitz continuous space partial derivatives. This is the first time that the space regularity of transition fronts in nonlocal equations has been studied. It is then shown that transition fronts are uniformly steep. Finally, asymptotic stability, in the sense of exponentially attracting front-like initial data, of transition fronts is studied.
Regularization of multiplicative iterative algorithms with nonnegative constraint
NASA Astrophysics Data System (ADS)
Benvenuto, Federico; Piana, Michele
2014-03-01
This paper studies the regularization of constrained maximum likelihood iterative algorithms applied to incompatible ill-posed linear inverse problems. Specifically, we introduce a novel stopping rule which defines a regularization algorithm for the image space reconstruction algorithm in the case of least-squares minimization. Further, we show that the same rule regularizes the expectation maximization algorithm in the case of Kullback-Leibler minimization, provided a well-justified modification of the definition of Tikhonov regularization is introduced. The performance of this stopping rule is illustrated in the case of an image reconstruction problem in x-ray solar astronomy.
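The idea of a stopping rule acting as the regularizer can be sketched for the Kullback-Leibler case: run the multiplicative EM (Richardson-Lucy) updates and halt when the discrepancy stalls. The relative-drop criterion below is a simplified surrogate, not the authors' exact rule:

```python
from math import log

def em_iterations(A, y, x0, max_iter=200, tol=1e-6):
    """Multiplicative EM (Richardson-Lucy) updates for minimizing the
    Kullback-Leibler discrepancy KL(y, Ax) over x >= 0, halted when the
    relative per-iteration drop of the discrepancy falls below `tol`.
    The stopping rule, not a penalty term, plays the regularizing role."""
    m, n = len(A), len(A[0])
    col = [sum(A[i][j] for i in range(m)) for j in range(n)]
    def kl(ax):                      # generalized KL, assuming y > 0
        return sum(y[i] * log(y[i] / ax[i]) - y[i] + ax[i] for i in range(m))
    x = list(x0)
    prev = float("inf")
    for k in range(max_iter):
        ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        d = kl(ax)
        if prev - d < tol * max(d, 1e-12):
            break                    # discrepancy has stalled: stop early
        prev = d
        x = [x[j] * sum(A[i][j] * y[i] / ax[i] for i in range(m)) / col[j]
             for j in range(n)]
    return x, k
```

The multiplicative update preserves nonnegativity automatically; for noisy, incompatible data, stopping early prevents the iterates from fitting the noise.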
Green, Daniel; Lawrence, Albion; McGreevy, John; Morrison, David R.; Silverstein, Eva; /SLAC /Stanford U., Phys. Dept.
2007-05-18
We show that string theory on a compact negatively curved manifold, preserving a U(1)^{b_1} winding symmetry, grows at least b_1 new effective dimensions as the space shrinks. The winding currents yield a "D-dual" description of a Riemann surface of genus h in terms of its 2h-dimensional Jacobian torus, perturbed by a closed string tachyon arising as a potential energy term in the worldsheet sigma model. D-branes on such negatively curved manifolds also reveal this structure, with a classical moduli space consisting of a b_1-torus. In particular, we present an AdS/CFT system which offers a non-perturbative formulation of such supercritical backgrounds. Finally, we discuss generalizations of this new string duality.
Full L1-regularized Traction Force Microscopy over whole cells.
Suñé-Auñón, Alejandro; Jorge-Peñas, Alvaro; Aguilar-Cuenca, Rocío; Vicente-Manzanares, Miguel; Van Oosterwyck, Hans; Muñoz-Barrutia, Arrate
2017-08-10
Traction Force Microscopy (TFM) is a widespread technique to estimate the tractions that cells exert on the surrounding substrate. To recover the tractions, it is necessary to solve an inverse problem, which is ill-posed and needs regularization to make the solution stable. The typical regularization scheme is given by the minimization of a cost functional, which is divided into two terms: the error present in the data, or data fidelity term, and the regularization, or penalty, term. The classical approach is to use zero-order Tikhonov or L2-regularization, which uses the L2-norm for both terms in the cost function. Recently, some studies have demonstrated an improved performance using L1-regularization (L1-norm in the penalty term), related to an increase in the spatial resolution and sensitivity of the recovered traction field. In this manuscript, we present a comparison between the previous two regularization schemes (both relying on the L2-norm for the data fidelity term) and full L1-regularization (using the L1-norm for both terms in the cost function) for synthetic and real data. Our results reveal that the L1-regularizations give improved spatial resolution (most pronounced for full L1-regularization) and a reduction in the background noise with respect to the classical zero-order Tikhonov regularization. In addition, we present an approximation which makes feasible the recovery of cellular tractions over whole cells on typical full-size microscope images when working in the spatial domain. The proposed full L1-regularization improves the sensitivity to recover small stress footprints. Moreover, the proposed method has been validated on full-field microscopy images of real cells, which demonstrates that it is a promising tool for biological applications.
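The qualitative difference between L2 and L1 penalty terms can be seen in a generic sketch: proximal-gradient (ISTA) iterations for an L1 penalty zero out small components via soft-thresholding, which is the mechanism behind sparser, higher-resolution recovered fields. This is an illustrative solver, not the paper's TFM pipeline (which also considers an L1 data-fidelity term):

```python
def ista(A, y, lam, steps=500):
    """Proximal-gradient (ISTA) iterations for
    0.5*|Ax - y|^2 + lam*|x|_1. The soft-thresholding step drives small
    components exactly to zero; a quadratic (L2) penalty would only
    shrink them, leaving a diffuse background."""
    m, n = len(A), len(A[0])
    # crude step size from the Frobenius norm (an upper bound on ||A||^2)
    lr = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        z = [x[j] - lr * g[j] for j in range(n)]
        x = [max(abs(v) - lr * lam, 0.0) * (1.0 if v > 0 else -1.0) for v in z]
    return x
```

On a trivial identity operator, a component below the threshold is recovered as exactly zero, the suppressed "background noise" of the abstract, while large components are only slightly shrunk.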
Dynamic MRI using SmooThness Regularization on Manifolds (SToRM)
Poddar, Sunrita; Jacob, Mathews
2017-01-01
We introduce a novel algorithm to recover real time dynamic MR images from highly under-sampled k-t space measurements. The proposed scheme models the images in the dynamic dataset as points on a smooth, low dimensional manifold in high dimensional space. We propose to exploit the non-linear and non-local redundancies in the dataset by posing its recovery as a manifold smoothness regularized optimization problem. A navigator acquisition scheme is used to determine the structure of the manifold, or equivalently the associated graph Laplacian matrix. The estimated Laplacian matrix is used to recover the dataset from undersampled measurements. The utility of the proposed scheme is demonstrated by comparisons with state of the art methods in multi-slice real-time cardiac and speech imaging applications. PMID:26685228
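Once the graph Laplacian L is fixed, the manifold-smoothness penalty reduces to a quadratic regularizer x^T L x. A toy sketch of Laplacian-regularized recovery (the three-frame chain graph and its weights are hypothetical stand-ins for the navigator-derived Laplacian, and plain gradient descent stands in for the paper's solver):

```python
def laplacian_smooth(y, W, lam, steps=500, tau=0.1):
    """Minimize 0.5*|x - y|^2 + lam * x^T L x, where L = diag(deg) - W is
    the graph Laplacian of the similarity weights W, by gradient descent.
    Neighboring frames on the graph are pulled toward each other, which
    is how non-local redundancy across the dataset is exploited."""
    n = len(y)
    deg = [sum(W[i]) for i in range(n)]
    x = list(y)
    for _ in range(steps):
        Lx = [deg[i] * x[i] - sum(W[i][j] * x[j] for j in range(n))
              for i in range(n)]
        x = [x[i] - tau * ((x[i] - y[i]) + 2.0 * lam * Lx[i])
             for i in range(n)]
    return x
```

The fixed point solves (I + 2*lam*L) x = y, so frames connected by large weights are averaged toward each other while isolated frames stay close to their measurements.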
NASA Astrophysics Data System (ADS)
Ioan Boţ, Radu; Hein, Torsten
2012-10-01
In this paper, we consider an iterative regularization scheme for linear ill-posed equations in Banach spaces. As opposed to other iterative approaches, we deal with a general penalty functional from Tikhonov regularization and take advantage of the properties of the regularized solutions which were supported by the choice of the specific penalty term. We present convergence and stability results for the proposed algorithm. Additionally, we demonstrate how these theoretical results can be applied to L1- and TV-regularization approaches and close the paper with a short numerical example.
Regular and Special Educators Inservice: A Model of Cooperative Effort.
ERIC Educational Resources Information Center
van Duyne, H. John; And Others
The Regular Education Inservice Program (REIT) at Bowling Green State University (Ohio) assists instructional resource centers (IRC's) and local educational agencies (LEA's) in developing and implementing inservice non-degree programs which respond to the mandates of Public Law 94-142. The target population is regular education personnel working…
12 CFR 311.5 - Regular procedure for closing meetings.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Regular procedure for closing meetings. 311.5 Section 311.5 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE RULES GOVERNING PUBLIC OBSERVATION OF MEETINGS OF THE CORPORATION'S BOARD OF DIRECTORS § 311.5 Regular...
39 CFR 3010.7 - Schedule of regular rate changes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 39 Postal Service 1 2011-07-01 2011-07-01 false Schedule of regular rate changes. 3010.7 Section 3010.7 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL REGULATION OF RATES FOR MARKET DOMINANT PRODUCTS General Provisions § 3010.7 Schedule of regular rate changes. (a) The Postal Service shall...
39 CFR 3010.7 - Schedule of regular rate changes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 39 Postal Service 1 2012-07-01 2012-07-01 false Schedule of regular rate changes. 3010.7 Section 3010.7 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL REGULATION OF RATES FOR MARKET DOMINANT PRODUCTS General Provisions § 3010.7 Schedule of regular rate changes. (a) The Postal Service shall...
39 CFR 3010.7 - Schedule of regular rate changes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 39 Postal Service 1 2013-07-01 2013-07-01 false Schedule of regular rate changes. 3010.7 Section 3010.7 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL REGULATION OF RATES FOR MARKET DOMINANT PRODUCTS General Provisions § 3010.7 Schedule of regular rate changes. (a) The Postal Service shall...
20 CFR 216.13 - Regular current connection test.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of the...
20 CFR 216.13 - Regular current connection test.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of the...
The residual method for regularizing ill-posed problems
Grasmair, Markus; Haltmeier, Markus; Scherzer, Otmar
2011-01-01
Although the residual method, or constrained regularization, is frequently used in applications, a detailed study of its properties is still missing. This sharply contrasts with the progress of the theory of Tikhonov regularization, where a series of new results for regularization in Banach spaces has been published in recent years. The present paper intends to bridge the gap between the existing theories as far as possible. We develop a stability and convergence theory for the residual method in general topological spaces. In addition, we prove convergence rates in terms of (generalized) Bregman distances, which can also be applied to non-convex regularization functionals. We provide three examples that show the applicability of our theory. The first example is the regularized solution of linear operator equations on Lp-spaces, where we show that the results of Tikhonov regularization generalize unchanged to the residual method. As a second example, we consider the problem of density estimation from a finite number of sampling points, using the Wasserstein distance as a fidelity term and an entropy measure as regularization term. It is shown that the densities obtained in this way depend continuously on the location of the sampled points and that the underlying density can be recovered as the number of sampling points tends to infinity. Finally, we apply our theory to compressed sensing. Here, we show the well-posedness of the method and derive convergence rates both for convex and non-convex regularization under rather weak conditions. PMID:22345828
47 CFR 76.614 - Cable television system regular monitoring.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Cable television system regular monitoring. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands 108...
47 CFR 76.614 - Cable television system regular monitoring.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false Cable television system regular monitoring. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands 108...
47 CFR 76.614 - Cable television system regular monitoring.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false Cable television system regular monitoring. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands 108...
47 CFR 76.614 - Cable television system regular monitoring.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false Cable television system regular monitoring. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands 108...
Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears
ERIC Educational Resources Information Center
Chen, Sau-Chin; Hu, Jon-Fan
2015-01-01
Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…
Chimeric mitochondrial peptides from contiguous regular and swinger RNA.
Seligmann, Hervé
2016-01-01
Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides, encoded partly by regular and partly by swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1,000). Among 186 peptides with > 8 residues in each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides partly matching regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
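The counts 23, 9, and 14 have a simple combinatorial reading: the bijective exchange rules are exactly the non-identity permutations of the four nucleotides (4! − 1 = 23), the symmetric rules are the involutions (applying them twice restores every base), and the rest are asymmetric cycles. A short sketch, offered as an illustration of the counting rather than as the paper's method:

```python
from itertools import permutations

BASES = "ACGT"

# Each exchange rule is a bijection on {A, C, G, T}; excluding the
# identity leaves 4! - 1 = 23 rules.
rules = [dict(zip(BASES, p)) for p in permutations(BASES)
         if p != tuple(BASES)]

# Symmetric exchanges (X <-> Y) are involutions: rule(rule(b)) == b
# for every base b. The remaining rules are asymmetric cycles.
symmetric = [r for r in rules if all(r[r[b]] == b for b in BASES)]
asymmetric = [r for r in rules if any(r[r[b]] != b for b in BASES)]

print(len(rules), len(symmetric), len(asymmetric))  # 23 9 14
```

The 9 involutions split into 6 single swaps (e.g. A ↔ C) and 3 double swaps; the 14 asymmetric rules are the 8 three-cycles and 6 four-cycles, matching the abstract's tally.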
Analysis of regularized Navier-Stokes equations, 2
NASA Technical Reports Server (NTRS)
Ou, Yuh-Roung; Sritharan, S. S.
1989-01-01
A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found, and regularity properties of these manifolds were analyzed.
Context-Sensitive Regularities in English Vowel Spelling.
ERIC Educational Resources Information Center
Aronoff, Mark; Koch, Eric
1996-01-01
Compares the predictive value of rime spellings in English to other types of regularities beyond the level of the single letter. Computer-analyzes a list of 24,000 written words, each paired with its corresponding pronunciation. Reveals that only a small number of rime spellings are highly regular in pronunciation. Suggests English spelling is…
29 CFR 778.500 - Artificial regular rates.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 3 2012-07-01 2012-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...
29 CFR 778.500 - Artificial regular rates.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 3 2013-07-01 2013-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...
29 CFR 778.500 - Artificial regular rates.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...
29 CFR 778.500 - Artificial regular rates.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 3 2014-07-01 2014-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...
29 CFR 778.500 - Artificial regular rates.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 3 2011-07-01 2011-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...
29 CFR 553.233 - “Regular rate” defined.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 3 2014-07-01 2014-07-01 false “Regular rate” defined. 553.233 Section 553.233 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR REGULATIONS APPLICATION... Enforcement Employees of Public Agencies Overtime Compensation Rules § 553.233 “Regular rate” defined. The...
29 CFR 553.233 - “Regular rate” defined.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 3 2012-07-01 2012-07-01 false “Regular rate” defined. 553.233 Section 553.233 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR REGULATIONS APPLICATION... Enforcement Employees of Public Agencies Overtime Compensation Rules § 553.233 “Regular rate” defined. The...