Science.gov

Sample records for dimensionally regularized polyakov

  1. Dimensional Regularization is Generic

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo

The absence of the quadratic divergence in the Higgs sector of the Standard Model in the dimensional regularization is usually regarded as an exceptional property of a specific regularization. To understand what is going on in the dimensional regularization, we illustrate how to reproduce the results of the dimensional regularization for the λϕ4 theory in more conventional regularizations such as the higher derivative regularization; the basic postulate involved is that the quadratically divergent induced mass, which is independent of the scale change of the physical mass, is kinematical and unphysical. This is consistent with the derivation of the Callan-Symanzik equation, which is a comparison of two theories with slightly different masses, for the λϕ4 theory without encountering the quadratic divergence. In this sense the dimensional regularization may be said to be generic in a bottom-up approach starting with a successful low energy theory. We also define a modified version of the mass independent renormalization for a scalar field which leads to the homogeneous renormalization group equation. Implications of the present analysis for the Standard Model at high energies and the presence or absence of SUSY at LHC energies are briefly discussed.

  2. Physical model of dimensional regularization

    NASA Astrophysics Data System (ADS)

    Schonfeld, Jonathan F.

    2016-12-01

We explicitly construct fractals of dimension 4 − ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity.

  3. Dimensional regularization in configuration space

    SciTech Connect

Bollini, C.G.; Giambiagi, J.J.

    1996-05-01

Dimensional regularization is introduced in configuration space by Fourier transforming in ν dimensions the perturbative momentum space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov and Shirkov, are needed for convolutions. The regularized causal functions in x space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant analytic functions of ν. Several examples are discussed. © 1996 The American Physical Society.
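For radial functions, the ν-dimensional Fourier transform invoked above reduces via Bochner's theorem to a one-dimensional Hankel-type integral; in the convention f̃(p) = ∫ e^{−ip·x} f(x) d^ν x, a standard form is:

```latex
\tilde{f}(p) \;=\; (2\pi)^{\nu/2}\, p^{\,1-\nu/2}
\int_0^\infty f(r)\, J_{\nu/2-1}(pr)\, r^{\nu/2}\, \mathrm{d}r .
```

The right-hand side is analytic in ν, which is what allows the Green functions to be continued to complex dimension and the ultraviolet divergences to appear as poles in ν.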

  4. Multiloop integrals in dimensional regularization made simple.

    PubMed

    Henn, Johannes M

    2013-06-21

    Scattering amplitudes at loop level can be expressed in terms of Feynman integrals. The latter satisfy partial differential equations in the kinematical variables. We argue that a good choice of basis for (multi)loop integrals can lead to significant simplifications of the differential equations, and propose criteria for finding an optimal basis. This builds on experience obtained in supersymmetric field theories that can be applied successfully to generic quantum field theory integrals. It involves studying leading singularities and explicit integral representations. When the differential equations are cast into canonical form, their solution becomes elementary. The class of functions involved is easily identified, and the solution can be written down to any desired order in ϵ within dimensional regularization. Results obtained in this way are particularly simple and compact. In this Letter, we outline the general ideas of the method and apply them to a two-loop example.
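The practical payoff described above — once the system is in canonical form, the ε-dependence factorizes and the solution is an elementary iterated integral, order by order in ε — can be illustrated on a one-variable toy equation (a sketch only; the paper's Feynman integrals satisfy matrix-valued systems):

```python
import math

# Toy canonical-form equation: f'(x) = (eps / x) * f(x), f(1) = 1.
# With eps factored out, each order is an iterated integral of the previous
# one: f_0 = 1, f_n(x) = int_1^x f_{n-1}(t) dt/t = log(x)**n / n!
def f_series(x, eps, order):
    return sum((eps * math.log(x)) ** n / math.factorial(n)
               for n in range(order + 1))

x, eps = 2.5, 0.1
approx = f_series(x, eps, order=6)   # truncated eps-expansion
exact = x ** eps                     # closed-form solution of the toy equation
```

The truncated series reproduces the exact solution to the truncation order, which is the sense in which the solution "can be written down to any desired order in ϵ".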

  5. Lattice calculation of the Polyakov loop and Polyakov loop correlators

    NASA Astrophysics Data System (ADS)

    Weber, Johannes Heinrich

    2017-03-01

We discuss calculations of the Polyakov loop and of Polyakov loop correlators using lattice gauge theory. We simulate QCD with 2+1 flavors and almost physical quark masses using the highly improved staggered quark action (HISQ). We demonstrate that the entropy derived from the Polyakov loop is a good probe of color screening. In particular, it allows for scheme-independent and quantitative conclusions about the deconfinement aspects of the crossover and for a rigorous study of the onset of weak-coupling behavior at high temperatures. We examine the correlators for small and large separations and identify vacuum-like and screening regimes in the thermal medium. We demonstrate that gauge-independent screening properties can be obtained even from gauge-fixed singlet correlators and that we can pin down the asymptotic regime.

  6. Lifshitz anomalies, Ward identities and split dimensional regularization

    NASA Astrophysics Data System (ADS)

    Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia

    2017-03-01

    We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2 + 1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.

  7. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
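As a rough illustration of the multiple-kernel embedding setting (not the authors' algorithm: the kernel weights below are fixed by hand rather than learned by the binary search and alternating optimization described above), a weighted combination of base kernels followed by kernel PCA can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                 # toy data, 40 samples

# Two base kernels: linear and Gaussian (RBF).
K_lin = X @ X.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-sq / (2 * sq.mean()))

# Hypothetical fixed kernel weights; the paper's method learns them.
mu = np.array([0.3, 0.7])
K = mu[0] * K_lin + mu[1] * K_rbf

# Kernel PCA on the combined kernel: center, eigendecompose,
# keep the top-d components as the low-dimensional embedding.
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
Kc = H @ K @ H
w, V = np.linalg.eigh(Kc)
d = 2
idx = np.argsort(w)[::-1][:d]
embedding = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```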

  8. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  9. Deterministic regularization of three-dimensional optical diffraction tomography

    PubMed Central

    Sung, Yongjin; Dasari, Ramachandra R.

    2012-01-01

    In this paper we discuss a deterministic regularization algorithm to handle the missing cone problem of three-dimensional optical diffraction tomography (ODT). The missing cone problem arises in most practical applications of ODT and is responsible for elongation of the reconstructed shape and underestimation of the value of the refractive index. By applying positivity and piecewise-smoothness constraints in an iterative reconstruction framework, we effectively suppress the missing cone artifact and recover sharp edges rounded out by the missing cone, and we significantly improve the accuracy of the predictions of the refractive index. We also show the noise handling capability of our algorithm in the reconstruction process. PMID:21811316
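The constrained iterative reconstruction described above can be sketched in one dimension as alternating projections between the measured (low-frequency) Fourier data and the positivity constraint; the frequency mask, signal, and iteration count below are illustrative only, not the paper's ODT geometry:

```python
import numpy as np

n = 64
true = np.zeros(n)
true[20:30] = 1.0                      # nonnegative object with sharp edges

F_true = np.fft.fft(true)
known = np.zeros(n, dtype=bool)        # "missing cone": only low frequencies measured
known[:8] = known[-7:] = True

# Iterative reconstruction: enforce measured Fourier data, then positivity.
x = np.zeros(n)
for _ in range(200):
    Fx = np.fft.fft(x)
    Fx[known] = F_true[known]          # data-consistency projection
    x = np.fft.ifft(Fx).real
    x = np.maximum(x, 0.0)             # positivity constraint

err_constrained = np.linalg.norm(x - true)
# Baseline: plain inverse transform of the incomplete data, no constraints.
x0 = np.fft.ifft(np.where(known, F_true, 0)).real
err_naive = np.linalg.norm(x0 - true)
```

Because both constraint sets are convex and contain the true object, each projection is non-expansive toward it, so the constrained reconstruction cannot do worse than the naive low-pass inversion and in practice suppresses the ringing from the missing frequencies.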

  10. Matching effective chiral Lagrangians with dimensional and lattice regularizations

    NASA Astrophysics Data System (ADS)

    Niedermayer, F.; Weisz, P.

    2016-04-01

We compute the free energy in the presence of a chemical potential coupled to a conserved charge in effective O(n) scalar field theory (without explicit symmetry breaking terms) to third order for asymmetric volumes in general d dimensions, using dimensional (DR) and lattice regularizations. This yields relations between the 4-derivative couplings appearing in the effective actions for the two regularizations, which in turn allows us to translate results, e.g. the mass gap in a finite periodic box in d = 3 + 1 dimensions, from one regularization to the other. Consistency is found with a new direct computation of the mass gap using DR. For the case n = 4, d = 4 the model is the low-energy effective theory of QCD with N_f = 2 massless quarks. The results can thus be used to obtain estimates of low energy constants in the effective chiral Lagrangian from measurements of the low energy observables, including the low lying spectrum of N_f = 2 QCD in the δ-regime using lattice simulations, as proposed by Peter Hasenfratz, or from the susceptibility corresponding to the chemical potential used.

  11. Dimensional reduction in numerical relativity: Modified Cartoon formalism and regularization

    NASA Astrophysics Data System (ADS)

    Cook, William G.; Figueras, Pau; Kunesch, Markus; Sperhake, Ulrich; Tunyasuvunakool, Saran

    2016-06-01

    We present in detail the Einstein equations in the Baumgarte-Shapiro-Shibata-Nakamura formulation for the case of D-dimensional spacetimes with SO(D - d) isometry based on a method originally introduced in Ref. 1. Regularized expressions are given for a numerical implementation of this method on a vertex centered grid including the origin of the quasi-radial coordinate that covers the extra dimensions with rotational symmetry. Axisymmetry, corresponding to the value d = D - 2, represents a special case with fewer constraints on the vanishing of tensor components and is conveniently implemented in a variation of the general method. The robustness of the scheme is demonstrated for the case of a black-hole head-on collision in D = 7 spacetime dimensions with SO(4) symmetry.

  12. Effective potential for Polyakov loops in lattice QCD

    NASA Astrophysics Data System (ADS)

    Nemoto, Y.; RBC Collaboration

    2003-05-01

Toward the derivation of an effective theory for Polyakov loops in lattice QCD, we examine Polyakov loop correlation functions using the multi-level algorithm which was recently developed by Lüscher and Weisz.

  13. Polyakov loop and correlator of Polyakov loops at next-to-next-to-leading order

    SciTech Connect

    Brambilla, Nora; Vairo, Antonio; Ghiglieri, Jacopo; Petreczky, Peter

    2010-10-01

We study the Polyakov loop and the correlator of two Polyakov loops at finite temperature in the weak-coupling regime. We calculate the Polyakov loop at order g^4. The calculation of the correlator of two Polyakov loops is performed at distances shorter than the inverse of the temperature and for electric screening masses larger than the Coulomb potential. In this regime, it is accurate up to order g^6. We also evaluate the Polyakov-loop correlator in an effective field theory framework that takes advantage of the hierarchy of energy scales in the problem and makes explicit the bound-state dynamics. In the effective field theory framework, we show that the Polyakov-loop correlator is at leading order in the multipole expansion the sum of a color-singlet and a color-octet quark-antiquark correlator, which are gauge invariant, and compute the corresponding color-singlet and color-octet free energies.

  14. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    PubMed

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation-time scheme, at a moderate computational overhead.
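The regularization step of the single-relaxation-time lattice Boltzmann scheme — replacing the non-equilibrium populations by their second-order Hermite (momentum-flux) projection — can be sketched for a single D2Q9 population set as follows (a minimal sketch of the regularization operator only; the paper's cavity-flow solver is of course a full space-time code):

```python
import numpy as np

# D2Q9 lattice: velocities c_i and weights w_i, with c_s^2 = 1/3.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1/3

def feq(rho, u):
    """Second-order equilibrium populations."""
    cu = c @ u
    return rho * w * (1 + cu / cs2 + cu**2 / (2 * cs2**2) - (u @ u) / (2 * cs2))

def regularize(f):
    """Project the non-equilibrium part of f onto its second-order
    Hermite contribution (the regularization step of the lattice BGK)."""
    rho = f.sum()
    u = (f[:, None] * c).sum(0) / rho
    fneq = f - feq(rho, u)
    # Non-equilibrium momentum flux Pi_ab = sum_i c_ia c_ib f_i^neq
    Pi = np.einsum('i,ia,ib->ab', fneq, c, c)
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)[None, :, :]
    fneq_reg = np.einsum('iab,ab->i', w[:, None, None] / (2 * cs2**2) * Q, Pi)
    return feq(rho, u) + fneq_reg

f = feq(1.0, np.array([0.05, 0.02])) \
    + 1e-3 * np.random.default_rng(2).normal(size=9)
f_reg = regularize(f)
```

By construction the projection leaves density and momentum untouched and filters out the higher-order ("ghost") modes that degrade stability.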

  15. Critical phenomena in the majority voter model on two-dimensional regular lattices.

    PubMed

    Acuña-Lara, Ana L; Sastre, Francisco; Vargas-Arriola, José Raúl

    2014-05-01

In this work we studied the critical behavior and the critical point as a function of the number of nearest neighbors on two-dimensional regular lattices. We performed numerical simulations on triangular, hexagonal, and bilayer square lattices. Using standard finite-size scaling theory we found that all cases fall in the two-dimensional Ising model universality class, but that the critical point value for the bilayer lattice does not follow the regular tendency that the Ising model shows.
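A minimal Monte Carlo sketch of the majority voter model with noise parameter q (on a square lattice for brevity; the paper studies triangular, hexagonal, and bilayer square lattices, and the lattice size, noise, and update count below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
L, q, steps = 32, 0.05, 20000          # lattice size, noise, single-site updates
s = rng.choice([-1, 1], size=(L, L))   # random initial spin configuration

for _ in range(steps):
    i, j = rng.integers(L, size=2)
    # Majority of the four nearest neighbors (periodic boundaries).
    m = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
         + s[i, (j + 1) % L] + s[i, (j - 1) % L])
    maj = np.sign(m) if m != 0 else rng.choice([-1, 1])
    # With probability 1-q align with the majority, with probability q oppose it.
    s[i, j] = maj if rng.random() > q else -maj

magnetization = abs(s.mean())          # order parameter
```

Sweeping q and measuring the magnetization cumulants on lattices of increasing size is what the finite-size scaling analysis in the paper builds on.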

16. Managing γ5 in Dimensional Regularization II: the Trace with more γ5's

    NASA Astrophysics Data System (ADS)

    Ferrari, Ruggero

    2017-03-01

In the present paper we evaluate the anomaly of the abelian axial current in a non-abelian chiral gauge theory by using dimensional regularization. This amounts to formulating a procedure for managing traces with more than one γ5. The suggested procedure obeys Lorentz covariance and cyclicity, at variance with previous approaches (e.g. the celebrated 't Hooft and Veltman's, where Lorentz covariance is violated). The result of the present paper is a further step forward in the program initiated by a previous work on the traces involving a single γ5. The final goal is an unconstrained definition of γ5 in dimensional regularization. Here, in the evaluation of the anomaly, we make use of the axial current conservation equation when radiative corrections are neglected. This kind of tool cannot always be exploited in field theories with γ5, e.g. in the use of dimensional regularization of infrared and collinear divergences.

  17. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    PubMed

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable for certain reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly more consistent in selecting genes than the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification.
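The adaptive idea — fit an elastic-net-penalized logistic regression, then refit with the L1 penalty reweighted by the inverse initial estimates — can be sketched with a plain proximal-gradient solver; the solver, penalty levels, and weighting scheme below are simplified stand-ins for the paper's AAElastic procedure, not its actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]       # only 3 informative "genes"
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ beta_true)))).astype(float)

def elastic_net_logistic(X, y, lam, alpha, w, iters=2000, lr=0.01):
    """Proximal-gradient solver for logistic loss + elastic net, with
    per-coefficient weights w on the L1 part (illustrative only)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p_hat = 1 / (1 + np.exp(-(X @ beta)))
        grad = X.T @ (p_hat - y) / len(y) + lam * (1 - alpha) * beta
        beta -= lr * grad
        # soft-thresholding: proximal step for the weighted L1 penalty
        t = lr * lam * alpha * w
        beta = np.sign(beta) * np.maximum(np.abs(beta) - t, 0)
    return beta

w0 = np.ones(p)
beta_init = elastic_net_logistic(X, y, lam=0.1, alpha=0.5, w=w0)
# Adaptive step: reweight the L1 penalty by the inverse initial estimates,
# so strongly supported genes are penalized less, weak ones more.
w_adapt = 1 / (np.abs(beta_init) + 1e-3)
beta_adapt = elastic_net_logistic(X, y, lam=0.1, alpha=0.5,
                                  w=w_adapt / w_adapt.mean())
```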

  18. Nonet meson properties in the Nambu-Jona-Lasinio model with dimensional versus cutoff regularization

    SciTech Connect

    Inagaki, T.; Kimura, D.; Kohyama, H.; Kvinikhidze, A.

    2011-02-01

The Nambu-Jona-Lasinio model with a Kobayashi-Maskawa-'t Hooft term is one low energy effective theory of QCD which includes the U_A(1) anomaly. We investigate nonet meson properties in this model with three flavors of quarks. We employ two types of regularizations, the dimensional and sharp cutoff ones. The model parameters are fixed phenomenologically for each regularization. Evaluating the kaon decay constant, the η meson mass and the topological susceptibility, we show the regularization dependence of the results and discuss the applicability of the Nambu-Jona-Lasinio model.

  19. On the Global Regularity of the Two-Dimensional Density Patch for Inhomogeneous Incompressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Liao, Xian; Zhang, Ping

    2016-06-01

Regarding P.-L. Lions' open question in Oxford Lecture Series in Mathematics and its Applications, Vol. 3 (1996) concerning the propagation of regularity for the density patch, we establish the global existence of solutions to the two-dimensional inhomogeneous incompressible Navier-Stokes system with initial density given by (1 − η)1_{Ω₀} + 1_{Ω₀ᶜ} for some small enough constant η and some W^{k+2,p} domain Ω₀, with initial vorticity belonging to L¹ ∩ L^p and with appropriate tangential regularities. Furthermore, we prove that the regularity of the domain Ω₀ is preserved by time evolution.

  20. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionality of co-variates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
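The two-stage scheme can be sketched with a plain ISTA lasso solver on synthetic data; the dimensions, penalty levels, and data-generating process below are illustrative, not the paper's, and the solver is a generic L1 method rather than the coordinate-descent implementation described above:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p_z, p_x = 200, 30, 10
Z = rng.normal(size=(n, p_z))                      # many candidate instruments
Gamma = np.zeros((p_z, p_x))
Gamma[:3, 0] = 1.0                                 # x_0 driven by 3 instruments
u = rng.normal(size=n)                             # unobserved confounder
X = Z @ Gamma + rng.normal(size=(n, p_x))
X[:, 0] += u                                       # x_0 is endogenous
beta = np.zeros(p_x)
beta[0] = 2.0                                      # the important covariate effect
y = X @ beta + u + rng.normal(size=n)

def ista_lasso(A, b, lam, iters=3000):
    """Plain ISTA for (1/2n)||b - A w||^2 + lam*||w||_1 (untuned sketch)."""
    w = np.zeros(A.shape[1])
    lr = len(b) / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(iters):
        w -= lr * A.T @ (A @ w - b) / len(b)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0)
    return w

# Stage 1: lasso of each covariate on the instruments; keep fitted values
# (the estimated "optimal instruments").
X_hat = np.column_stack([Z @ ista_lasso(Z, X[:, j], lam=0.1)
                         for j in range(p_x)])
# Stage 2: lasso of the outcome on the fitted (exogenous) covariates.
beta_hat = ista_lasso(X_hat, y, lam=0.1)
```

Using the stage-1 fitted values rather than the raw covariates purges the confounder u, so the stage-2 estimate of the effect of x_0 is close to its true value.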

  1. Phase structure of the Polyakov-quark-meson model

    NASA Astrophysics Data System (ADS)

    Schaefer, B.-J.; Pawlowski, J. M.; Wambach, J.

    2007-10-01

The relation between the deconfinement and chiral phase transitions is explored in the framework of a Polyakov-loop-extended two-flavor quark-meson (PQM) model. In this model the Polyakov loop dynamics is represented by a background temporal gauge field which also couples to the quarks. As a novelty, an explicit quark chemical potential and N_f dependence in the Polyakov loop potential are proposed by using renormalization group arguments. The behavior of the Polyakov loop as well as the chiral condensate as a function of temperature and quark chemical potential is obtained by minimizing the grand canonical thermodynamic potential of the system. The effect of the Polyakov loop dynamics on the chiral phase diagram and on several thermodynamic bulk quantities is presented.

  2. High density quark matter in the Nambu-Jona-Lasinio model with dimensional versus cutoff regularization

    SciTech Connect

    Fujihara, T.; Kimura, D.; Inagaki, T.; Kvinikhidze, A.

    2009-05-01

We investigate the color superconducting phase at high density in the extended Nambu-Jona-Lasinio model for two-flavor quarks. Because of the nonrenormalizability of the model, physical observables may depend on the regularization procedure; that is why we apply two types of regularization, the cutoff and the dimensional one, to evaluate the phase structure, the equation of state, and the relationship between the mass and the radius of a dense star. To obtain the phase structure we evaluate the minimum of the effective potential at finite temperature and chemical potential. The stress tensor is calculated to derive the equation of state. Solving the Tolman-Oppenheimer-Volkoff equation, we show the relationship between the mass and the radius of a dense star. Interestingly, the dependence on the regularization is found not to be small. The dimensional regularization predicts a color superconducting phase at rather large values of μ (in agreement with perturbative QCD, in contrast to the cutoff regularization), in a larger temperature interval, and the existence of heavier and larger quark stars.

  3. Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization

    NASA Astrophysics Data System (ADS)

    Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna

    2014-12-01

    We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields, the resulting renormalization method is always applicable, we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.

  4. Regularized Regression Versus the High-Dimensional Propensity Score for Confounding Adjustment in Secondary Database Analyses.

    PubMed

    Franklin, Jessica M; Eddings, Wesley; Glynn, Robert J; Schneeweiss, Sebastian

    2015-10-01

    Selection and measurement of confounders is critical for successful adjustment in nonrandomized studies. Although the principles behind confounder selection are now well established, variable selection for confounder adjustment remains a difficult problem in practice, particularly in secondary analyses of databases. We present a simulation study that compares the high-dimensional propensity score algorithm for variable selection with approaches that utilize direct adjustment for all potential confounders via regularized regression, including ridge regression and lasso regression. Simulations were based on 2 previously published pharmacoepidemiologic cohorts and used the plasmode simulation framework to create realistic simulated data sets with thousands of potential confounders. Performance of methods was evaluated with respect to bias and mean squared error of the estimated effects of a binary treatment. Simulation scenarios varied the true underlying outcome model, treatment effect, prevalence of exposure and outcome, and presence of unmeasured confounding. Across scenarios, high-dimensional propensity score approaches generally performed better than regularized regression approaches. However, including the variables selected by lasso regression in a regular propensity score model also performed well and may provide a promising alternative variable selection method.

  5. Visualizations of coherent center domains in local Polyakov loops

    SciTech Connect

Stokes, Finn M.; Kamleh, Waseem; Leinweber, Derek B.

    2014-09-15

Quantum Chromodynamics exhibits a hadronic confined phase at low to moderate temperatures and, at a critical temperature T_C, undergoes a transition to a deconfined phase known as the quark–gluon plasma. The nature of this deconfinement phase transition is probed through visualizations of the Polyakov loop, a gauge independent order parameter. We produce visualizations that provide novel insights into the structure and evolution of center clusters. Using the HMC algorithm, the percolation during the deconfinement transition is observed. Using 3D rendering of the phase and magnitude of the Polyakov loop, the fractal structure and correlations are examined. The evolution of the center clusters as the gauge fields thermalize from below the critical temperature to above it is also exposed. We observe deconfinement proceeding through a competition for the dominance of a particular center phase. We use stout-link smearing to remove small-scale noise in order to observe the large-scale evolution of the center clusters. A correlation between the magnitude of the Polyakov loop and the proximity of its phase to one of the center phases of SU(3) is evident in the visualizations.

Highlights:
• We produce visualizations of center clusters in Polyakov loops.
• The evolution of center clusters with HMC simulation time is examined.
• Visualizations provide novel insights into the percolation of center clusters.
• The magnitude and phase of the Polyakov loop are studied.
• A correlation between the magnitude and center phase proximity is evident.
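Computing the local Polyakov loop itself is straightforward: at each spatial site one traces the ordered product of the temporal gauge links. A toy sketch with random SU(3) links (no physical gauge ensemble, and a one-dimensional spatial slice for brevity):

```python
import numpy as np

rng = np.random.default_rng(6)
Nt, Ns = 4, 6                      # temporal and spatial lattice extent

def random_su3():
    """Random special unitary 3x3 matrix via QR of a complex Gaussian."""
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))   # fix phases -> unitary
    return Q / np.linalg.det(Q) ** (1 / 3)             # det = 1

# Temporal links U_4(t, x) on a 1-d spatial slice.
U = np.array([[random_su3() for _ in range(Ns)] for _ in range(Nt)])

def polyakov(U, x):
    """Polyakov loop at spatial site x: normalized trace of the
    time-ordered product of temporal links."""
    P = np.eye(3, dtype=complex)
    for t in range(U.shape[0]):
        P = P @ U[t, x]
    return np.trace(P) / 3

loops = np.array([polyakov(U, x) for x in range(Ns)])
```

The phases of these complex numbers are what clusters around the three center phases of SU(3) in the visualizations described above.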

  6. Three-dimensional ionospheric tomography reconstruction using the model function approach in Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu

    2016-12-01

Ionospheric tomography is based on the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distribution. Because the satellite-receiver geometry provides incomplete measurements, it is a typical ill-posed problem, and how to overcome the ill-posedness is still a crucial research topic. In this paper, the Tikhonov regularization method is used and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance and a location-dependent spatial correlation, and the correlation model is developed by using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. The Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used for independent validation. Both the test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.
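The Tikhonov step can be sketched as a closed-form solve regularized toward a background model; the projection matrix, noise level, and parameter choice below are illustrative, and the crude discrepancy-principle sweep stands in for the model function approach the paper actually uses to select the parameter:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 40, 60                       # fewer ray measurements than voxels: ill-posed
A = rng.normal(size=(m, n))         # stand-in for the ray-projection geometry
x_bg = np.full(n, 1.0)              # background model (IRI-like prior)
x_true = x_bg + rng.normal(scale=0.3, size=n)
b = A @ x_true + rng.normal(scale=0.05, size=m)   # observed slant TEC

def tikhonov(A, b, x_bg, lam):
    """min ||A x - b||^2 + lam ||x - x_bg||^2, solved in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n),
                           A.T @ b + lam * x_bg)

# Crude discrepancy-principle sweep for lam (assumes the noise level known).
target = np.sqrt(m) * 0.05
lams = np.logspace(-3, 3, 50)
residuals = [np.linalg.norm(A @ tikhonov(A, b, x_bg, l) - b) for l in lams]
lam_opt = lams[np.argmin(np.abs(np.array(residuals) - target))]
x_rec = tikhonov(A, b, x_bg, lam_opt)
```

The regularization pulls the unobserved directions toward the background model while fitting the measured rays, which is exactly the balance between sTEC observations and background field described above.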

  7. Optically programmable encoder based on light propagation in two-dimensional regular nanoplates.

    PubMed

    Li, Ya; Zhao, Fangyin; Guo, Shuai; Zhang, Yongyou; Niu, Chunhui; Zeng, Ruosheng; Zou, Bingsuo; Zhang, Wensheng; Ding, Kang; Bukhtiar, Arfan; Liu, Ruibin

    2017-04-07

We design an efficient optically controlled microdevice based on CdSe nanoplates. Two-dimensional CdSe nanoplates exhibit lighting patterns around the edges and can be realized as a new type of optically controlled programmable encoder. The light source is used to excite the nanoplates and control the logical position under vertical pumping mode by the objective lens. At each excitation point in the nanoplates, the preferred light-propagation routes are along the normal direction and perpendicular to the edges, which then emit out from the edges to form a localized lighting section. The intensity distribution around the edges of different nanoplates demonstrates that the lighting part with a small scale is much stronger, defined as '1', than the dark section, defined as '0', along the edge. These '0' and '1' are the basic logic elements needed to compose logically functional devices. The observed propagation rules are consistent with theoretical simulations, meaning that the guided-light route in two-dimensional semiconductor nanoplates is regular and predictable. The same situation was also observed in regular CdS nanoplates. Basic theoretical analysis and experiments prove that the guided light and exit position follow rules mainly originating from the shape rather than the material itself.

  8. Optically programmable encoder based on light propagation in two-dimensional regular nanoplates

    NASA Astrophysics Data System (ADS)

    Li, Ya; Zhao, Fangyin; Guo, Shuai; Zhang, Yongyou; Niu, Chunhui; Zeng, Ruosheng; Zou, Bingsuo; Zhang, Wensheng; Ding, Kang; Bukhtiar, Arfan; Liu, Ruibin

    2017-04-01

We design an efficient optically controlled microdevice based on CdSe nanoplates. Two-dimensional CdSe nanoplates exhibit lighting patterns around the edges and can be realized as a new type of optically controlled programmable encoder. The light source is used to excite the nanoplates and control the logical position under vertical pumping mode by the objective lens. At each excitation point in the nanoplates, the preferred light-propagation routes are along the normal direction and perpendicular to the edges, which then emit out from the edges to form a localized lighting section. The intensity distribution around the edges of different nanoplates demonstrates that the lighting part with a small scale is much stronger, defined as ‘1’, than the dark section, defined as ‘0’, along the edge. These ‘0’ and ‘1’ are the basic logic elements needed to compose logically functional devices. The observed propagation rules are consistent with theoretical simulations, meaning that the guided-light route in two-dimensional semiconductor nanoplates is regular and predictable. The same situation was also observed in regular CdS nanoplates. Basic theoretical analysis and experiments prove that the guided light and exit position follow rules mainly originating from the shape rather than the material itself.

  9. Local feedback regularization of three-dimensional Navier-Stokes equations on bounded domains

    NASA Astrophysics Data System (ADS)

    Balogh, Andras

    One of the outstanding open problems in applied mathematics is the question of well-posedness of the initial boundary value problem associated with the three-dimensional fluid flow. At the same time, due to important applications in control theory, numerical analysis and turbulence, various types of regularizations and controls are gaining new interest. The specific problem we consider here is inspired by recent advances in the control of nonlinear distributed parameter systems and its possible applications to hydrodynamics. The main objective is to investigate the extent to which the 3-dimensional Navier-Stokes system can be regularized using a particular, physically motivated, feedback control law. The feedback is introduced in the form of an additional nonlinear viscosity term. Since control over the whole domain is not feasible in general, i.e., it is not usually possible to measure the entire state of the system, we consider a feedback supported only on a subdomain. On the rest of the domain the classical Navier-Stokes equations govern the fluid flow. The additional viscosity term is physically meaningful in the sense that it is proportional to the energy dissipation functional on the subdomain. For the controlled system we prove the existence, uniqueness and stability of the strong solution for initial data and forcing term which are arbitrary on the subdomain of control and are sufficiently small (in appropriate function spaces) outside this subdomain.

  10. Regularization of the two-dimensional filter diagonalization method: FDM2K

    PubMed

    Chen; Mandelshtam; Shaka

    2000-10-01

    We outline an important advance in the problem of obtaining a two-dimensional (2D) line list of the most prominent features in a 2D high-resolution NMR spectrum in the presence of noise, when using the Filter Diagonalization Method (FDM) to sidestep limitations of conventional FFT processing. Although respectable absorption-mode spectra have been obtained previously by the artifice of "averaging" several FDM calculations, no 2D line list could be directly obtained from the averaged spectrum, and each calculation produced numerical artifacts that were demonstrably inconsistent with the measured data, but which could not be removed a posteriori. By regularizing the intrinsically ill-defined generalized eigenvalue problem that FDM poses, in a particular quite plausible way, features that are weak or stem from numerical problems are attenuated, allowing better characterization of the dominant spectral features. We call the new algorithm FDM2K. Copyright 2000 Academic Press.
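
    The regularization idea can be illustrated on a generic ill-conditioned generalized eigenvalue problem. The Tikhonov-style shift below is a simplified stand-in for the approach the abstract describes, not the exact FDM2K prescription; the matrices are toy examples.

    ```python
    import numpy as np

    def regularized_gev_vals(A, B, q=1e-6):
        """Eigenvalues of A v = lambda B v with an ill-conditioned B, via the
        Tikhonov-style shift B -> B + q*I. A generic illustration of the
        regularization idea, not the exact FDM2K prescription."""
        n = B.shape[0]
        return np.linalg.eigvals(np.linalg.solve(B + q * np.eye(n), A))

    A = np.diag([2.0, 3.0])
    B = np.diag([1.0, 1e-14])        # nearly singular "overlap" matrix
    vals = np.sort(np.abs(regularized_gev_vals(A, B)))
    # the well-conditioned eigenvalue ~2 is preserved; the near-null-space
    # mode is attenuated to a finite ~3e6 instead of blowing up
    ```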

  11. Mechanics of shear banding in a regularized two-dimensional model of a granular medium

    NASA Astrophysics Data System (ADS)

    Hunt, G. W.; Hammond, J.

    2012-10-01

    A regularized two-dimensional model for the buckling of force chains is presented, comprising identical rigid discs sitting initially in a conventional close-packed arrangement. As linear elastic constitutive laws are used throughout, the only nonlinearity in the system comes from large rotations as the resulting force chains are obliged to buckle under imposed end-shortening. The evolving deflected shapes are seen to develop and interact in a highly complex bifurcation structure. Analysis by the nonlinear continuation code Auto exposes at realistic load levels an energy landscape rich in local minima. A number of such states are identified, amongst them families of solutions with the familiar appearance of shear bands over a finite number of discs. A well-known "snakes and ladders" pattern is identified as the mechanism for the addition of extra discs to increase the width of the band.

  12. Seiberg-Witten and 'Polyakov-like' Magnetic Bion Confinements are Continuously Connected

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat (SLAC; Stanford U., Phys. Dept.)

    2012-06-01

    We study four-dimensional N = 2 supersymmetric pure-gauge (Seiberg-Witten) theory and its N = 1 mass perturbation by using compactification on S¹ × R³. It is well known that on R⁴ (or at large S¹ size L) the perturbed theory realizes confinement through monopole or dyon condensation. At small S¹, we demonstrate that confinement is induced by a generalization of Polyakov's three-dimensional instanton mechanism to a locally four-dimensional theory - the magnetic bion mechanism - which also applies to a large class of nonsupersymmetric theories. Using a large-L vs. small-L Poisson duality, we show that the two mechanisms of confinement, previously thought to be distinct, are in fact continuously connected.

  13. Duality and the Polyakov N-point Green's function

    SciTech Connect

    Nepomechie, R.I.

    1982-05-15

    Recently Polyakov proposed an N-point Green's function G(p/sub 1/,...,p/sub N/) for closed strings. We consider the problem of finding poles in the external momenta and extracting on-shell scattering amplitudes from G. Moreover, we find that G is invariant under complex Moebius transformations, and is presumably dual.

  14. RENORMALIZATION OF POLYAKOV LOOPS IN FUNDAMENTAL AND HIGHER REPRESENTATIONS

    SciTech Connect

    Kaczmarek, O.; Gupta, S.; Huebner, K.

    2007-07-30

    We compare two renormalization procedures, one based on the short-distance behavior of heavy quark-antiquark free energies and the other using bare Polyakov loops at different temporal extents of the lattice, and find that both prescriptions are equivalent, resulting in renormalization constants that depend on the bare coupling. Furthermore, these renormalization constants show Casimir scaling for higher representations of the Polyakov loops. The analysis of Polyakov loops in different representations of the color SU(3) group indicates that a simple perturbatively inspired relation in terms of the quadratic Casimir operator is realized to a good approximation at temperatures T ≳ T_c, for renormalized as well as bare loops. In contrast to a vanishing Polyakov loop in representations with non-zero triality in the confined phase, the adjoint loops are small but non-zero even for temperatures below the critical one. The adjoint quark-antiquark pairs exhibit screening. This behavior can be related to the binding energy of glue-lump states.

  15. Duality and the Knizhnik-Polyakov-Zamolodchikov relation in Liouville quantum gravity.

    PubMed

    Duplantier, Bertrand; Sheffield, Scott

    2009-04-17

    We present a (mathematically rigorous) probabilistic and geometrical proof of the Knizhnik-Polyakov-Zamolodchikov relation between scaling exponents in a Euclidean planar domain D and in Liouville quantum gravity. It uses the properly regularized quantum area measure dμ_γ = ε^(γ²/2) e^(γ h_ε(z)) dz, where dz is the Lebesgue measure on D, γ is a real parameter, 0 ≤ γ < 2, and h_ε(z) denotes the mean value of the Gaussian free field h on the circle of radius ε centered at z. The singular case γ > 2 is shown to be related to the quantum measure dμ_{γ'}, γ' < 2, by the fundamental duality γγ' = 4.

  16. Globally regular instability of 3-dimensional anti-de Sitter spacetime.

    PubMed

    Bizoń, Piotr; Jałmużna, Joanna

    2013-07-26

    We consider three-dimensional anti-de Sitter (AdS) gravity minimally coupled to a massless scalar field and study numerically the evolution of small smooth circularly symmetric perturbations of the AdS3 spacetime. As in higher dimensions, for a large class of perturbations, we observe a turbulent cascade of energy to high frequencies which entails instability of AdS3. However, in contrast to higher dimensions, the cascade cannot be terminated by black hole formation because small perturbations have energy below the black hole threshold. This situation appears to be challenging for the cosmic censor. Analyzing the energy spectrum of the cascade we determine the width ρ(t) of the analyticity strip of solutions in the complex spatial plane and argue by extrapolation that ρ(t) does not vanish in finite time. This provides evidence that the turbulence is too weak to produce a naked singularity and the solutions remain globally regular in time, in accordance with the cosmic censorship hypothesis.

  17. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
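
    The Tikhonov baselines named in the comparison can be written down directly. The solver below is a generic sketch of zero-order (L = I) and first-order (L = first-difference operator) Tikhonov least squares; the STRE model itself is not reproduced here, and the toy system is an assumption for illustration.

    ```python
    import numpy as np

    def tikhonov(A, b, lam, order=0):
        """Tikhonov-regularized least squares: min ||A x - b||^2 + lam ||L x||^2,
        with L = I (zero-order) or the first-difference operator (first-order).
        Generic baselines as named in the abstract, not the STRE model."""
        n = A.shape[1]
        L = np.eye(n) if order == 0 else np.diff(np.eye(n), axis=0)
        return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

    b = np.array([1.0, 2.0, 3.0])
    x0 = tikhonov(np.eye(3), b, 1.0, order=0)  # zero-order shrinks amplitude: b/2
    x1 = tikhonov(np.eye(3), b, 1.0, order=1)  # first-order smooths differences only
    ```

    Zero-order damps the overall magnitude of the solution, while first-order penalizes only spatial jumps, which is why the two behave differently on smooth potential distributions.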

  18. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    PubMed Central

    Yao, Bing; Yang, Hui

    2016-01-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods. PMID:27966576

  19. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem.

    PubMed

    Yao, Bing; Yang, Hui

    2016-12-14

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.

  20. Polyakov-Nambu-Jona-Lasinio model in finite volumes

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Abhijit; Ghosh, Sanjay K.; Ray, Rajarshi; Saha, Kinkar; Upadhaya, Sudipa

    2016-12-01

    We discuss the 2+1 flavor Polyakov loop enhanced Nambu-Jona-Lasinio model in a finite volume. The main objective is to check the volume scaling of thermodynamic observables for various temperatures and chemical potentials. We observe the possible violation of the scaling with system size in a considerable window along the whole transition region in the T-μ_q plane.

  1. Three-dimensional beam pattern of regular sperm whale clicks confirms bent-horn hypothesis

    NASA Astrophysics Data System (ADS)

    Zimmer, Walter M. X.; Tyack, Peter L.; Johnson, Mark P.; Madsen, Peter T.

    2005-03-01

    The three-dimensional beam pattern of a sperm whale (Physeter macrocephalus) tagged in the Ligurian Sea was derived using data on regular clicks from the tag and from hydrophones towed behind a ship circling the tagged whale. The tag defined the orientation of the whale, while sightings and beamformer data were used to locate the whale with respect to the ship. The existence of a narrow, forward-directed P1 beam with source levels exceeding 210 dBpeak re: 1 μPa at 1 m is confirmed. A modeled forward-beam pattern, that matches clicks >20° off-axis, predicts a directivity index of 26.7 dB and source levels of up to 229 dBpeak re: 1 μPa at 1 m. A broader backward-directed beam is produced by the P0 pulse with source levels near 200 dBpeak re: 1 μPa at 1 m and a directivity index of 7.4 dB. A low-frequency component with source levels near 190 dBpeak re: 1 μPa at 1 m is generated at the onset of the P0 pulse by air resonance. The results support the bent-horn model of sound production in sperm whales. While the sperm whale nose appears primarily adapted to produce an intense forward-directed sonar signal, less-directional click components convey information to conspecifics, and give rise to echoes from the seafloor and the surface, which may be useful for orientation during dives.

  2. Two-dimensional encoder with picometre resolution using lattice spacing on regular crystalline surface as standard

    NASA Astrophysics Data System (ADS)

    Aketagawa, Masato; Honda, Hiroshi; Ishige, Masashi; Patamaporn, Chaikool

    2007-02-01

    A two-dimensional (2D) encoder with picometre resolution using multi-tunnelling-probes scanning tunnelling microscope (MTP-STM) as detector units and a regular crystalline lattice as a reference is proposed. In experiments to demonstrate the method, a highly oriented pyrolytic graphite (HOPG) crystal is utilized as the reference. The MTP-STM heads, which are set upon a sample stage, observe multi-points which satisfy some relationship on the HOPG crystalline surface on the sample stage, and the relative 2D displacement between the MTP-STM heads and the sample stage can be determined from the multi-current signals of the multi-points. Two unit lattice vectors on the HOPG crystalline surface with length and intersection angle of 0.246 nm and 60°, respectively, are utilized as 2D displacement references. 2D displacement of the sample stage on which the HOPG crystal is placed can be calculated using the linear sum of the two unit lattice vectors, derived from a linear operation of the multi-current signals. Displacement interpolation less than the lattice spacing of the HOPG crystal can also be performed. To determine the linear sum of the two unit vectors as the 2D displacement, the multi-points to be observed with the MTP-STM must be properly positioned according to the 2D atomic structure of the HOPG crystal. In the experiments, the proposed method is compared with a capacitance sensor whose resolution is improved to approximately 0.1 nm by limiting the sensor's bandwidth to 300 Hz. In order to obtain suitable multi-current signals of the properly positioned multi-points in semi-real-time, lateral dither modulations are applied to the STM probes. The results show that the proposed method has the capability to measure 2D lateral displacements with a resolution on the order of 10 pm with a maximum measurement speed of 100 nm s⁻¹ or more.
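
    The displacement decomposition described above is a short calculation: the stage motion is the linear sum of the two unit lattice vectors. A minimal sketch using the lattice constants quoted in the abstract; the coefficients (m, n) are assumed inputs that, in the instrument, would be derived from the multi-probe tunnelling currents.

    ```python
    import numpy as np

    # HOPG unit lattice vectors quoted in the abstract:
    # length 0.246 nm, intersection angle 60 degrees
    a1 = 0.246 * np.array([1.0, 0.0])
    a2 = 0.246 * np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])

    def displacement(m, n):
        """2D stage displacement (in nm) as the linear sum m*a1 + n*a2.
        The coefficients (m, n), including fractional interpolated values,
        are hypothetical inputs standing in for the current-signal analysis."""
        return m * a1 + n * a2
    ```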

  3. Theory of inversion with two-dimensional regularization: profiles of microphysical particle properties derived from multiwavelength lidar measurements.

    PubMed

    Kolgotin, Alexei; Müller, Detlef

    2008-09-01

    We present the theory of inversion with two-dimensional regularization. We use this novel method to retrieve profiles of microphysical properties of atmospheric particles from profiles of optical properties acquired with multiwavelength Raman lidar. This technique is, to the best of our knowledge, the first attempt toward an operational inversion algorithm, which is strongly needed in view of multiwavelength Raman lidar networks. The new algorithm has several advantages over the inversion with so-called classical one-dimensional regularization. Extensive data postprocessing procedures, which are needed to obtain a sensible physical solution space with the classical approach, are reduced. Data analysis, which strongly depends on the experience of the operator, is put on a more objective basis. Thus, we strongly increase unsupervised data analysis. First results from simulation studies show that the new methodology in many cases outperforms our old methodology regarding accuracy of retrieved particle effective radius, and number, surface-area, and volume concentration. The real and the imaginary parts of the complex refractive index can be estimated with at least as equal accuracy as with our old method of inversion with one-dimensional regularization. However, our results on retrieval accuracy still have to be verified in a much larger simulation study.

  4. From chiral quark dynamics with Polyakov loop to the hadron resonance gas model

    SciTech Connect

    Arriola, E. R.; Salcedo, L. L.; Megias, E.

    2013-03-25

    Chiral quark models with Polyakov loop at finite temperature have been often used to describe the phase transition. We show how the transition to a hadron resonance gas is realized based on the quantum and local nature of the Polyakov loop.

  5. Three-dimensional quantitative microwave imaging of realistic numerical breast phantoms using Huber regularization.

    PubMed

    Bai, Funing; Franchois, Ann; De Zaeytijd, Jurgen; Pižurica, Aleksandra

    2013-01-01

    Breast tumor detection with microwaves is based on the difference in dielectric properties between normal and malignant tissues. The complex permittivity reconstruction of inhomogeneous dielectric biological tissues from microwave scattering is a nonlinear, ill-posed inverse problem. We proposed to use the Huber regularization in our previous work where some preliminary results for piecewise constant objects were shown. In this paper, we employ the Huber function as regularization in the even more challenging 3D piecewise continuous case of a realistic numerical breast phantom. The resulting reconstructions of complex permittivity profiles indicate potential for biomedical imaging.
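
    For reference, the Huber function used as the regularizer is quadratic near zero and linear in the tails, which penalizes noise while not over-smoothing the large jumps at tissue boundaries. A minimal sketch; the threshold delta is illustrative, not a value from the paper.

    ```python
    import numpy as np

    def huber(t, delta=1.0):
        """Huber function: quadratic for |t| <= delta, linear beyond, so
        large jumps (edges) are penalized less harshly than by a purely
        quadratic penalty. delta here is an illustrative threshold."""
        t = np.abs(t)
        return np.where(t <= delta, 0.5 * t**2, delta * (t - 0.5 * delta))

    vals = huber(np.array([0.5, 2.0]))  # 0.5 is in the quadratic branch
                                        # (0.125), 2.0 in the linear (1.5)
    ```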

  6. Solving the hypersingular boundary integral equation in three-dimensional acoustics using a regularization relationship.

    PubMed

    Yan, Zai You; Hung, Kin Chew; Zheng, Hui

    2003-05-01

    Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of discretized operator matrix, evaluation of the double surface integrals is reduced to calculate the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.
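
    The key computational point, reducing a double surface integral to the product of two discretized operator matrices, can be illustrated with a one-dimensional quadrature analogue. The kernels, grid, and weights here are hypothetical toys, not the acoustic kernels of the paper.

    ```python
    import numpy as np

    def discretized_operator(kernel, x, w):
        """Matrix M with M[i, j] = kernel(x_i, x_j) * w_j, so that M @ f is
        the quadrature approximation of the integral operator applied to f.
        Illustrative 1-D analogue of the paper's surface discretization."""
        return kernel(x[:, None], x[None, :]) * w[None, :]

    # A double integral operator becomes a product of discretized matrices:
    x = np.linspace(0.0, 1.0, 200)
    w = np.full_like(x, x[1] - x[0])          # crude uniform quadrature weights
    M = discretized_operator(lambda s, t: s * t, x, w)
    f = np.ones_like(x)
    # For kernel s*t applied twice to f = 1: inner integral gives t/2,
    # outer gives s/6, so the value at s = 1 should approach 1/6.
    approx = M @ (M @ f)
    ```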

  7. Arbitrary parameters in implicit regularization and democracy within perturbative description of 2-dimensional gravitational anomalies

    NASA Astrophysics Data System (ADS)

    Souza, Leonardo A. M.; Sampaio, Marcos; Nemes, M. C.

    2006-01-01

    We show that the Implicit Regularization Technique is useful to display quantum symmetry breaking in a completely regularization-independent fashion. Arbitrary parameters are expressed by finite differences between integrals of the same superficial degree of divergence, whose value is fixed on physical grounds (symmetry requirements or phenomenology). We study Weyl fermions on a classical gravitational background in two dimensions and show that, assuming Lorentz symmetry, the Weyl and Einstein Ward identities reduce to a set of algebraic equations for the arbitrary parameters, which allows us to study the Ward identities on an equal footing. We conclude in a renormalization-independent way that the axial part of the Einstein Ward identity is always violated. Moreover, whereas we can preserve the pure tensor part of the Einstein Ward identity at the expense of violating the Weyl Ward identities, we may as well violate the former and preserve the latter.

  8. Regularization of two-dimensional supersymmetric Yang-Mills theory via non-commutative geometry

    NASA Astrophysics Data System (ADS)

    Valavane, K.

    2000-11-01

    Non-commutative geometry is a possible framework for regularizing quantum field theory in a non-perturbative way. This idea is an extension of the lattice approximation by non-commutativity that allows us to preserve symmetries. The supersymmetric version has also been studied, in particular for the Schwinger model on a supersphere. This paper is a generalization of this latter work to more general gauge groups.

  9. Two-loop electroweak corrections to Higgs-gluon couplings to higher orders in the dimensional regularization parameter

    NASA Astrophysics Data System (ADS)

    Bonetti, Marco; Melnikov, Kirill; Tancredi, Lorenzo

    2017-03-01

    We compute the two-loop electroweak corrections to the production of the Higgs boson in gluon fusion to higher orders in the dimensional-regularization parameter ε = (4 - d)/2. We employ the method of differential equations, augmented by the choice of a canonical basis, to compute the relevant integrals and express them in terms of Goncharov polylogarithms. Our calculation provides useful results for the computation of the NLO mixed QCD-electroweak corrections to gg → H and establishes the necessary framework towards the calculation of the missing three-loop virtual corrections.

  10. Assembly of the most topologically regular two-dimensional micro and nanocrystals with spherical, conical, and tubular shapes.

    PubMed

    Roshal, D S; Konevtsova, O V; Myasnikova, A E; Rochal, S B

    2016-11-01

    We consider how to control the extension of curvature-induced defects in the hexagonal order covering different curved surfaces. In this framework we propose a physical mechanism for improving the structures of two-dimensional spherical colloidal crystals (SCCs). For any SCC comprising about 300 or fewer particles, the mechanism transforms all extended topological defects (ETDs) in the hexagonal order into point disclinations. Perfecting the structure is carried out by successive cycles of particle implantation and subsequent relaxation of the crystal. The mechanism is potentially suitable for obtaining colloidosomes with better selective permeability. Our approach enables modeling of the most topologically regular tubular and conical two-dimensional nanocrystals, including various possible polymorphic forms of the HIV viral capsid. Different HIV-like shells with an arbitrary number of structural units (SUs) and desired geometrical parameters are easily formed. Faceting of the obtained structures is performed by minimizing the suggested elastic energy.

  11. Assembly of the most topologically regular two-dimensional micro and nanocrystals with spherical, conical, and tubular shapes

    NASA Astrophysics Data System (ADS)

    Roshal, D. S.; Konevtsova, O. V.; Myasnikova, A. E.; Rochal, S. B.

    2016-11-01

    We consider how to control the extension of curvature-induced defects in the hexagonal order covering different curved surfaces. In this framework we propose a physical mechanism for improving the structures of two-dimensional spherical colloidal crystals (SCCs). For any SCC comprising about 300 or fewer particles, the mechanism transforms all extended topological defects (ETDs) in the hexagonal order into point disclinations. Perfecting the structure is carried out by successive cycles of particle implantation and subsequent relaxation of the crystal. The mechanism is potentially suitable for obtaining colloidosomes with better selective permeability. Our approach enables modeling of the most topologically regular tubular and conical two-dimensional nanocrystals, including various possible polymorphic forms of the HIV viral capsid. Different HIV-like shells with an arbitrary number of structural units (SUs) and desired geometrical parameters are easily formed. Faceting of the obtained structures is performed by minimizing the suggested elastic energy.

  12. Reparametrizing the Polyakov-Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Abhijit; Ghosh, Sanjay K.; Maity, Soumitra; Raha, Sibaji; Ray, Rajarshi; Saha, Kinkar; Upadhaya, Sudipa

    2017-03-01

    The Polyakov-Nambu-Jona-Lasinio model has been quite successful in describing various qualitative features of observables for strongly interacting matter that are measurable in heavy-ion collision experiments. The question remains, however, of the quantitative uncertainties in the model results. Such an estimation is possible only by contrasting these results with those obtained from first principles using the lattice QCD framework. Recently a variety of lattice QCD data were reported in the realistic continuum limit. Here we make a first attempt at reparametrizing the model so as to reproduce these lattice data. We find excellent quantitative agreement for the equation of state. Certain discrepancies in the charge and strangeness susceptibilities as well as the baryon-charge correlation still remain. We discuss their causes and outline possible directions to remove them.

  13. Fast ultrasound beam prediction for linear and regular two-dimensional arrays.

    PubMed

    Hlawitschka, Mario; McGough, Robert J; Ferrara, Katherine W; Kruse, Dustin E

    2011-09-01

    Real-time beam predictions are highly desirable for the patient-specific computations required in ultrasound therapy guidance and treatment planning. To address the longstanding issue of the computational burden associated with calculating the acoustic field in large volumes, we use graphics processing unit (GPU) computing to accelerate the computation of monochromatic pressure fields for therapeutic ultrasound arrays. In our strategy, we start with acceleration of field computations for single rectangular pistons, and then we explore fast calculations for arrays of rectangular pistons. For single-piston calculations, we employ the fast near-field method (FNM) to accurately and efficiently estimate the complex near-field wave patterns for rectangular pistons in homogeneous media. The FNM is compared with the Rayleigh-Sommerfeld method (RSM) for the number of abscissas required in the respective numerical integrations to achieve 1%, 0.1%, and 0.01% accuracy in the field calculations. Next, algorithms are described for accelerated computation of beam patterns for two different ultrasound transducer arrays: regular 1-D linear arrays and regular 2-D linear arrays. For the array types considered, the algorithm is split into two parts: 1) the computation of the field from one piston, and 2) the computation of a piston-array beam pattern based on a pre-computed field from one piston. It is shown that the process of calculating an array beam pattern is equivalent to the convolution of the single-piston field with the complex weights associated with an array of pistons. Our results show that the algorithms for computing monochromatic fields from linear and regularly spaced arrays can benefit greatly from GPU computing hardware, exceeding the performance of an expensive CPU by more than 100 times using an inexpensive GPU board. For a single rectangular piston, the FNM method facilitates volumetric computations with 0.01% accuracy at rates better than 30 ns per field point
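
    The stated equivalence between an array beam pattern and a convolution of the single-piston field with the complex element weights can be sketched on a 1-D grid. The discretization below is a simplified illustration assuming the grid spacing divides the array pitch; the field values are toy numbers, not FNM output.

    ```python
    import numpy as np

    def array_field(piston_field, weights, pitch):
        """Beam pattern of a regular 1-D array as the convolution of one
        piston's sampled field with the complex element weights placed at
        the element positions. Assumes the sampling grid spacing divides
        the array pitch (given here in grid points)."""
        kernel = np.zeros((len(weights) - 1) * pitch + 1, dtype=complex)
        kernel[::pitch] = weights          # weights at the element positions
        return np.convolve(piston_field, kernel)

    # Superposition check: a uniform 2-element array equals the sum of the
    # single-piston field and a copy shifted by the pitch.
    f = np.array([1 + 0j, 2, 3])
    total = array_field(f, np.array([1.0, 1.0]), pitch=2)
    ```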

  14. Experimental investigation of thermal structures in regular three-dimensional falling films

    NASA Astrophysics Data System (ADS)

    Rietz, M.; Rohlfs, W.; Kneer, R.; Scheid, B.

    2015-03-01

    Interfacial waves on the surface of a falling liquid film are known to modify heat and mass transfer. Under non-isothermal conditions, the wave topology is strongly influenced by the presence of thermocapillary (Marangoni) forces at the interface which leads to a destabilization of the film flow and potentially to critical film thinning. In this context, the present study investigates the evolution of the surface topology and the evolution of the surface temperature for the case of regularly excited solitary-type waves on a falling liquid film under the influence of a wall-side heat flux. Combining film thickness (chromatic confocal imaging) and surface temperature information (infrared thermography), interactions between hydrodynamics and thermocapillary forces are revealed. These include the formation of rivulets, film thinning and wave number doubling in spanwise direction. Distinct thermal structures on the films' surface can be associated to characteristics of the surface topology.

  15. One-dimensional diffusion problem with not strengthened regular boundary conditions

    NASA Astrophysics Data System (ADS)

    Orazov, I.; Sadybekov, M. A.

    2015-11-01

    In this paper we consider one family of problems simulating the determination of target components and density of sources from given values of the initial and final states. The mathematical statement of these problems leads to the inverse problem for the diffusion equation, where it is required to find not only a solution of the problem, but also its right-hand side that depends only on a spatial variable. One of specific features of the considered problems is that the system of eigenfunctions of the multiple differentiation operator subject to boundary conditions does not have the basis property. We prove the existence and uniqueness of classical solutions of the problem, solving the problem independently of whether the corresponding spectral problem (for the operator of multiple differentiation with not strengthened regular boundary conditions) has a basis of generalized eigenfunctions.

  16. A note on the dimensional regularization of the Standard Model coupled with quantum gravity

    NASA Astrophysics Data System (ADS)

    Anselmi, Damiano

    2004-08-01

    In flat space, γ5 and the epsilon tensor break the dimensionally continued Lorentz symmetry, but propagators have fully Lorentz invariant denominators. When the Standard Model is coupled with quantum gravity γ5 breaks the continued local Lorentz symmetry. I show how to deform the Einstein Lagrangian and gauge-fix the residual local Lorentz symmetry so that the propagators of the graviton, the ghosts and the BRST auxiliary fields have fully Lorentz invariant denominators. This makes the calculation of Feynman diagrams more efficient.

  17. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    NASA Astrophysics Data System (ADS)

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-09-01

    Continuous-time quantum walks provide an important framework for designing new algorithms and for modelling quantum transport and state-transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, which can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompasses the dynamics of the problem at hand without specific knowledge of the underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular the star graph. These examples show that regularity and high connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies after removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of single-qubit transfer on an XY spin network.
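As a rough illustration of the invariant-subspace idea, the sketch below grows a Krylov basis from the initial state, which is exactly what a Lanczos iteration does for a symmetric Hamiltonian. It is not the authors' code; the complete-graph search Hamiltonian H = -γA - |w⟩⟨w| with γ = 1/N used here is just a standard textbook example.

```python
import numpy as np

def invariant_subspace(H, psi0, tol=1e-10):
    """Grow an orthonormal Krylov basis {psi0, H psi0, H^2 psi0, ...};
    it spans the smallest invariant subspace of H containing psi0."""
    basis = [psi0 / np.linalg.norm(psi0)]
    while True:
        v = H @ basis[-1]
        for b in basis:              # full reorthogonalization
            v = v - (b @ v) * b
        nrm = np.linalg.norm(v)
        if nrm < tol:                # the subspace has closed on itself
            break
        basis.append(v / nrm)
    V = np.column_stack(basis)
    return V, V.T @ H @ V            # orthonormal basis, reduced Hamiltonian

# Spatial search on the complete graph K_N: H = -gamma*A - |w><w|
N = 16
A = np.ones((N, N)) - np.eye(N)      # adjacency matrix of K_N
marked = np.zeros(N); marked[0] = 1.0
s = np.ones(N) / np.sqrt(N)          # uniform superposition (initial state)
H = -(1.0 / N) * A - np.outer(marked, marked)
V, H_red = invariant_subspace(H, s)
# the N-dimensional search dynamics reduce to a 2-dimensional subspace
```

The reduction is exact: H maps the subspace spanned by V into itself, so the full dynamics can be propagated with the small matrix H_red.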

  18. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    PubMed Central

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-01-01

    Continuous-time quantum walks provide an important framework for designing new algorithms and for modelling quantum transport and state-transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, which can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompasses the dynamics of the problem at hand without specific knowledge of the underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular the star graph. These examples show that regularity and high connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies after removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of single-qubit transfer on an XY spin network. PMID:26330082

  19. Three-dimensional regular arrangement of the annular ligament of the rat stapediovestibular joint.

    PubMed

    Ohashi, Mitsuru; Ide, Soyuki; Kimitsuki, Takashi; Komune, Shizuo; Suganuma, Tatsuo

    2006-03-01

    The stapes footplate articulates with the vestibular window through the annular ligament. This articulation is known as the stapediovestibular joint (SVJ). We investigated the ultrastructure of adult rat SVJ and report here on the characteristic ultrastructure of the corresponding annular ligament. Transmission electron microscopy showed that this annular ligament comprises thick ligament fibers consisting of a peripheral mantle of microfibrils and an electron-lucent central amorphous substance that is regularly arranged in a linear fashion, forming laminated structures parallel to the horizontal plane of the SVJ. Scanning electron microscopy revealed that transverse microfibrils cross the thick ligament fibers, showing a lattice-like structure. The annular ligament was vividly stained with elastica van Gieson's stain and the Verhoeff's iron hematoxylin method. Staining of the electron-lucent central amorphous substance of the thick ligament fibers by the tannate-metal salt method revealed an intense electron density. These results indicate that the annular ligament of the SVJ is mainly composed of mature elastic fibers.

  20. Regularization Method for Predicting an Ordinal Response Using Longitudinal High-dimensional Genomic Data

    PubMed Central

    Hou, Jiayi

    2015-01-01

    An ordinal scale is commonly used to measure health status and disease-related outcomes in hospital settings as well as in translational medical research. In addition, repeated measurements are common in clinical practice for tracking and monitoring the progression of complex diseases. Classical methodology based on statistical inference, in particular ordinal modeling, has contributed to the analysis of data in which the response categories are ordered and the number of covariates (p) remains smaller than the sample size (n). With genomic technologies increasingly applied for more accurate diagnosis and prognosis, high-dimensional data, where the number of covariates (p) is much larger than the number of samples (n), are generated. To meet these emerging needs, we introduce a two-stage algorithm: we extend the Generalized Monotone Incremental Forward Stagewise (GMIFS) method to the cumulative logit ordinal model, and we combine the GMIFS procedure with a classical mixed-effects model to classify disease status over the course of disease progression. We demonstrate the efficiency and accuracy of the proposed models for classification using a time-course microarray dataset collected from the Inflammation and the Host Response to Injury study. PMID:25720102
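The monotone incremental forward stagewise idea behind GMIFS can be sketched in a few lines. The toy below uses a plain logistic model rather than the paper's cumulative-logit ordinal model, purely for brevity; it is an illustrative sketch under that assumption, not the authors' algorithm. Each iteration nudges the single coefficient with the steepest negative log-likelihood gradient by a small fixed increment.

```python
import numpy as np

def stagewise_logistic(X, y, steps=3000, eps=0.01):
    """GMIFS-like fit: at each step, increment by eps the one coordinate
    with the most negative gradient of the negative log-likelihood."""
    n, p = X.shape
    Xe = np.hstack([X, -X])        # sign-expanded design: coefficients stay >= 0
    beta = np.zeros(2 * p)
    for _ in range(steps):
        prob = 1.0 / (1.0 + np.exp(-(Xe @ beta)))
        grad = Xe.T @ (prob - y)   # NLL gradient over the expanded design
        j = np.argmin(grad)
        if grad[j] >= 0:           # no descent coordinate left: stop early
            break
        beta[j] += eps
    return beta[:p] - beta[p:]     # fold back to signed coefficients

# Toy data: only the first two covariates carry signal
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
b = stagewise_logistic(X, y)
```

The slow, monotone coefficient growth is what gives forward stagewise methods their implicit regularization in the p >> n regime.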

  1. Hedgehog black holes and the Polyakov loop at strong coupling

    NASA Astrophysics Data System (ADS)

    Headrick, Matthew

    2008-05-01

    In N=4 super-Yang-Mills theory at large N, large λ, and finite temperature, the value of the Wilson-Maldacena loop wrapping the Euclidean time circle (the Polyakov-Maldacena loop, or PML) is computed by the area of a certain minimal surface in the dual supergravity background. This prescription can be used to calculate the free energy as a function of the PML (averaged over the spatial coordinates), by introducing into the bulk action a Lagrange multiplier term that fixes the (average) area of the appropriate minimal surface. This term, which can also be viewed as a chemical potential for the PML, contributes to the bulk stress tensor like a string stretching from the horizon to the boundary (smeared over the angular directions). We find the corresponding “hedgehog” black hole solutions numerically, within an SO(6)-preserving ansatz, and derive part of the free energy diagram for the PML. As a warm-up problem, we also find exact solutions for hedgehog black holes in pure gravity, and derive the free energy and phase diagrams for that system.

  2. Studies on Polyakov and Nambu-Goto random surface path integrals on QCD(SU(∞)): Interquark potential and phenomenological scattering amplitudes

    NASA Astrophysics Data System (ADS)

    Botelho, Luiz C. L.

    2017-02-01

    We present new path-integral studies of the Polyakov noncritical and Nambu-Goto critical string theories and their applications to the QCD(SU(∞)) interquark potential. We also evaluate the long-distance asymptotic behavior of the interquark potential in the Nambu-Goto string theory with an extrinsic term in Polyakov's string at D → ∞. Finally, we propose an alternative and new view of the covariant Polyakov string path integral with a fourth-order two-dimensional quantum gravity, which is an effective stringy description of QCD(SU(∞)) in the deep infrared region.

  3. The NSVZ scheme for N = 1 SQED with Nf flavors, regularized by the dimensional reduction, in the three-loop approximation

    NASA Astrophysics Data System (ADS)

    Aleshin, S. S.; Goriachuk, I. O.; Kataev, A. L.; Stepanyantz, K. V.

    2017-01-01

    At the three-loop level we analyze how the NSVZ relation appears for N = 1 SQED regularized by dimensional reduction. This is done by a method analogous to the one used earlier for theories regularized by higher derivatives. Within the dimensional technique, the loop integrals cannot be written as integrals of double total derivatives. However, similar structures can be written in the considered approximation and are taken as a starting point. We then demonstrate that, unlike the case of the higher derivative regularization, the NSVZ relation is not valid for the renormalization group functions defined in terms of the bare coupling constant. However, for the renormalization group functions defined in terms of the renormalized coupling constant, it is possible to impose boundary conditions on the renormalization constants that give the NSVZ scheme in the three-loop order. They are similar to the all-loop conditions defining the NSVZ scheme obtained with the higher derivative regularization, but are more complicated. The NSVZ schemes constructed with dimensional reduction and with the higher derivative regularization are related by a finite renormalization in the considered approximation.

  4. Application of alternative synthetic kernel approximation to radiative transfer in regular and irregular two-dimensional media

    NASA Astrophysics Data System (ADS)

    Altaç, Zekeriya; Sert, Zerrin

    2017-01-01

    The alternative synthetic kernel (ASKN) approximation, just like the standard SKN method, is derived from the radiative integral transfer equations in full 3D generality. The direct and diffuse terms of thermal radiation appear explicitly in the radiative integral transfer equations as surface and volume integrals, respectively. In the standard SKN method, the approximation is applied to the diffuse terms while the direct terms are evaluated analytically. The alternative formulation differs from the standard one in that the direct-radiation wall contributions are also approximated in the same spirit of the synthetic kernel approximation. This alternative formulation also yields a set of coupled partial differential equations (the ASKN equations) which can be solved using finite volume methods. The approximation is applied to radiative transfer calculations in regular and irregular two-dimensional absorbing, emitting, and isotropically scattering media. Four benchmark problems, one rectangular and three with irregular media, are considered, and the net radiative flux and/or incident energy solutions along the boundaries are compared with available exact, standard discrete ordinates S4 and S12, modified discrete ordinates S4, Monte Carlo, and collocation spectral solutions to assess the accuracy of the method. The ASKN approximation yields ray-effect-free incident energy and radiative flux distributions, and low-order ASKN solutions are generally better than those of the high-order standard discrete ordinates method.

  5. Regularity criterion for solutions of the three-dimensional Cahn-Hilliard-Navier-Stokes equations and associated computations

    NASA Astrophysics Data System (ADS)

    Gibbon, John D.; Pal, Nairita; Gupta, Anupam; Pandit, Rahul

    2016-12-01

    We consider the three-dimensional (3D) Cahn-Hilliard equations coupled to, and driven by, the forced, incompressible 3D Navier-Stokes equations. The combination, known as the Cahn-Hilliard-Navier-Stokes (CHNS) equations, is used in statistical mechanics to model the motion of a binary fluid. The potential development of singularities (blow-up) in the contours of the order parameter ϕ is an open problem. To address this we have proved a theorem that closely mimics the Beale-Kato-Majda theorem for the 3D incompressible Euler equations [J. T. Beale, T. Kato, and A. J. Majda, Commun. Math. Phys. 94, 61 (1984), 10.1007/BF01212349]. By taking an L∞ norm of the energy of the full binary system, designated as E∞, we have shown that ∫₀ᵗ E∞(τ) dτ governs the regularity of solutions of the full 3D system. Our direct numerical simulations (DNSs) of the 3D CHNS equations for (a) a gravity-driven Rayleigh-Taylor instability and (b) a constant-energy-injection forcing, with 128³ to 512³ collocation points and over the duration of our DNSs, confirm that E∞ remains bounded as far as our computations allow.

  6. Estimating the smoothness of the regular component of the solution to a one-dimensional singularly perturbed convection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Andreev, V. B.

    2015-01-01

    The first boundary value problem for a one-dimensional singularly perturbed convection-diffusion equation with variable coefficients on a finite interval is considered. For the regular component of the solution, unimprovable a priori estimates in the Hölder norms are obtained. The estimates are unimprovable in the sense that they fail on any weakening of the estimating norm.

  7. The Polyakov loop correlator at NNLO and singlet and octet correlators

    SciTech Connect

    Ghiglieri, Jacopo

    2011-05-23

    We present the complete next-to-next-to-leading-order calculation of the correlation function of two Polyakov loops for temperatures smaller than the inverse distance between the loops and larger than the Coulomb potential. We discuss the relationship of this correlator with the singlet and octet potentials which we obtain in an Effective Field Theory framework based on finite-temperature potential Non-Relativistic QCD, showing that the Polyakov loop correlator can be re-expressed, at the leading order in a multipole expansion, as a sum of singlet and octet contributions. We also revisit the calculation of the expectation value of the Polyakov loop at next-to-next-to-leading order.

  8. Propagator, sewing rules, and vacuum amplitude for the Polyakov point particles with ghosts

    SciTech Connect

    Giannakis, I.; Ordonez, C.R.; Rubin, M.A.; Zucchini, R.

    1989-01-01

    The authors apply techniques developed for strings to the case of the spinless point particle. The Polyakov path integral with ghosts is used to obtain the propagator and one-loop vacuum amplitude. The propagator is shown to correspond to the Green's function for the BRST field theory in Siegel gauge. The reparametrization invariance of the Polyakov path integral is shown to lead automatically to the correct trace log result for the one-loop diagram, despite the fact that naive sewing of the ends of a propagator would give an incorrect answer. This type of failure of naive sewing is identical to that found in the string case. The present treatment provides, in the simplified context of the point particle, a pedagogical introduction to Polyakov path integral methods with and without ghosts.

  9. Selective spatial localization of the atom displacements in one-dimensional hybrid quasi-regular (Thue-Morse and Rudin-Shapiro)/periodic structures

    NASA Astrophysics Data System (ADS)

    Montalbán, A.; Velasco, V. R.; Tutor, J.; Fernández-Velicia, F. J.

    2007-06-01

    We have studied the vibrational frequencies and atom displacements of one-dimensional systems formed by combinations of Thue-Morse and Rudin-Shapiro quasi-regular stackings with periodic ones. The materials are described by nearest-neighbor force constants and the corresponding atom masses. These systems exhibit differences in the frequency spectrum as compared to the original simple quasi-regular generations and periodic structures. The most important feature is the presence of separate confinement of the atom displacements in one of the parts forming the total composite structure for different frequency ranges, thus acting as a kind of phononic cavity.

  10. Maximal Sobolev regularity for solutions of elliptic equations in infinite dimensional Banach spaces endowed with a weighted Gaussian measure

    NASA Astrophysics Data System (ADS)

    Cappa, G.; Ferrari, S.

    2016-12-01

    Let X be a separable Banach space endowed with a non-degenerate centered Gaussian measure μ. The associated Cameron-Martin space is denoted by H. Let ν = e^{-U}μ, where U : X → R is a sufficiently regular convex and continuous function. In this paper we are interested in the W^{2,2} regularity of the weak solutions of elliptic equations of the type

  11. Renormalized Polyakov loop in the deconfined phase of SU(N) gauge theory and gauge-string duality.

    PubMed

    Andreev, Oleg

    2009-05-29

    We use gauge-string duality to analytically evaluate the renormalized Polyakov loop in pure Yang-Mills theories. For SU(3), the result is in quite good agreement with lattice simulations for a broad temperature range.

  12. Polyakov loop and heavy quark entropy in strong magnetic fields from holographic black hole engineering

    NASA Astrophysics Data System (ADS)

    Critelli, Renato; Rougemont, Romulo; Finazzo, Stefano I.; Noronha, Jorge

    2016-12-01

    We investigate the temperature and magnetic field dependence of the Polyakov loop and heavy quark entropy in a bottom-up Einstein-Maxwell-dilaton (EMD) holographic model for the strongly coupled quark-gluon plasma that quantitatively matches lattice data for the (2+1)-flavor QCD equation of state at finite magnetic field and physical quark masses. We compare the holographic EMD model results for the Polyakov loop at zero and nonzero magnetic fields and the heavy quark entropy at vanishing magnetic field with the latest lattice data available for these observables and find good agreement for temperatures T ≳ 150 MeV and magnetic fields eB ≲ 1 GeV². Predictions for the behavior of the heavy quark entropy at nonzero magnetic fields are made that could be readily tested on the lattice.

  13. Physical properties of Polyakov loop geometrical clusters in SU(2) gluodynamics

    NASA Astrophysics Data System (ADS)

    Ivanytskyi, A. I.; Bugaev, K. A.; Nikonov, E. G.; Ilgenfritz, E.-M.; Oliinychenko, D. R.; Sagun, V. V.; Mishustin, I. N.; Petrov, V. K.; Zinovjev, G. M.

    2017-04-01

    We apply the liquid droplet model to describe the clustering phenomenon in SU(2) gluodynamics, especially in the vicinity of the deconfinement phase transition. In particular, we analyze the size distributions of clusters formed by the Polyakov loops of the same sign. Within such an approach this phase transition can be considered as the transition between two types of liquids where one of the liquids (the largest droplet of a certain Polyakov loop sign) experiences a condensation, while the other one (the next to largest droplet of opposite Polyakov loop sign) evaporates. The clusters of smaller sizes form two accompanying gases, and their size distributions are described by the liquid droplet parameterization. By fitting the lattice data we have extracted the value of the Fisher exponent τ = 1.806 ± 0.008. We also found that the temperature dependences of the surface tension of the two gaseous clusters are entirely different below and above the phase transition and, hence, can serve as an order parameter. The critical exponents of the surface tension coefficient in the vicinity of the phase transition are found. Our analysis shows that above the critical temperature the surface tension coefficient behaves as T² in one gas of clusters and as T⁴ in the other.

  14. SU(3) Polyakov linear-σ model in an external magnetic field

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser; Magdy, Niseem

    2014-07-01

    In the present work, we analyze the effects of an external magnetic field on the chiral critical temperature Tc of strongly interacting matter. In doing this, we can characterize the magnetic properties of the quantum chromodynamics (QCD) strongly interacting matter, the quark-gluon plasma (QGP). We investigate this in the framework of the SU(3) Polyakov linear sigma model (PLSM). To this end, we implement two approaches representing two systems, in which the Polyakov-loop potential added to the PLSM is either renormalized or non-renormalized. The effects of Landau quantization on the strongly interacting matter are conjectured to reduce the electromagnetic interactions between quarks. In this case, the color interactions will be dominant and increasing, which in turn can be achieved by increasing the Polyakov-loop fields. Obviously, each of them equips us with a different understanding of the critical temperature under the effect of an external magnetic field. In both systems, we obtain a paramagnetic response. In one system, we find that Tc increases with increasing magnetic field. In the other one, Tc significantly decreases with increasing magnetic field.

  15. Entropy-based viscous regularization for the multi-dimensional Euler equations in low-Mach and transonic flows

    SciTech Connect

    Marc O Delchini; Jean E. Ragusa; Ray A. Berry

    2015-07-01

    We present a new version of the entropy viscosity method, a viscous regularization technique for hyperbolic conservation laws, that is well-suited for low-Mach flows. By means of a low-Mach asymptotic study, new expressions for the entropy viscosity coefficients are derived. These definitions are valid for a wide range of Mach numbers, from subsonic flows (with very low Mach numbers) to supersonic flows, and no longer depend on an analytical expression for the entropy function. In addition, the entropy viscosity method is extended to Euler equations with variable area for nozzle flow problems. The effectiveness of the method is demonstrated using various 1-D and 2-D benchmark tests: flow in a converging–diverging nozzle; Leblanc shock tube; slow moving shock; strong shock for liquid phase; low-Mach flows around a cylinder and over a circular hump; and supersonic flow in a compression corner. Convergence studies are performed for smooth solutions and solutions with shocks present.
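For readers unfamiliar with the entropy viscosity method, the sketch below shows the classic coefficients for the 1D Burgers equation with entropy η = u²/2. This is the generic construction the paper builds on, not its low-Mach variant, and the constants c_e and c_max are illustrative placeholders.

```python
import numpy as np

def entropy_viscosity(u, u_old, h, dt, c_e=1.0, c_max=0.5):
    """Entropy-viscosity coefficients for 1D Burgers (entropy eta = u^2/2,
    entropy flux q = u^3/3): nu = min(first-order cap, scaled entropy residual)."""
    eta, eta_old = 0.5 * u**2, 0.5 * u_old**2
    q = u**3 / 3.0
    res = (eta - eta_old) / dt + np.gradient(q, h)   # discrete entropy residual
    denom = np.abs(eta - eta.mean()).max() + 1e-12   # normalization scale
    nu_entropy = c_e * h**2 * np.abs(res) / denom
    nu_max = c_max * h * np.abs(u)                   # first-order upwind cap
    return np.minimum(nu_max, nu_entropy)

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
u_smooth = np.ones_like(x)                 # constant state: zero residual
u_shock = np.tanh((x - 0.5) / 0.05)        # steep front around x = 0.5
u_shock_old = np.tanh((x - 0.51) / 0.05)   # same front, slightly shifted in time
nu_flat = entropy_viscosity(u_smooth, u_smooth, h, dt=0.01)
nu_front = entropy_viscosity(u_shock, u_shock_old, h, dt=0.01)
```

nu_flat vanishes identically while nu_front is localized around the front, which is exactly the shock-capturing behavior the method relies on.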

  16. Enforced neutrality and color-flavor unlocking in the three-flavor Polyakov-loop Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Abuki, H.; Ciminale, M.; Gatto, R.; Nardulli, G.; Ruggieri, M.

    2008-04-01

    We study how charge neutrality affects the phase structure of the three-flavor Polyakov-loop Nambu-Jona-Lasinio (PNJL) model. We point out that, within the conventional PNJL model at finite density, color neutrality is missing because the Wilson line serves as an external colored field coupled to dynamical quarks. In this paper we heuristically assume that the model may still be applicable. To get color neutrality, one then has to allow nonvanishing color chemical potentials. We study how the quark matter phase diagram in the (T, m_s²/μ) plane is affected by imposing neutrality and by including the Polyakov-loop dynamics. Although these two effects are correlated in a nonlinear way, the impact of the Polyakov loop turns out to be significant in the T direction, while imposing neutrality brings a remarkable effect in the m_s²/μ direction. In particular, we find a novel unlocking transition, when the temperature is increased, even in the chiral SU(3) limit. We clarify how and why this is possible once the dynamics of the colored Polyakov loop is taken into account. We also succeed in giving an analytic expression for Tc for the transition from two-flavor pairing (2SC) to unpaired quark matter in the presence of the Polyakov loop.

  17. Microwave transmission through one-dimensional hybrid quasi-regular (fibonacci and Thue-Morse)/periodic structures

    NASA Astrophysics Data System (ADS)

    Trabelsi, Youssef; Benali, Naim; Bouazzi, Yassine; Kanzari, Mounir

    2013-09-01

    The transmission properties of hybrid quasi-periodic photonic systems (HQPS), made by combining one-dimensional periodic photonic crystals (PPCs) and quasi-periodic photonic crystals (QPCs), were theoretically studied. The hybrid quasi-periodic photonic lattice, based on hetero-structures, was built from the Fibonacci and Thue-Morse sequences. We addressed the microwave properties of waves through the one-dimensional symmetric Fibonacci and Thue-Morse systems, i.e., quasi-periodic structures made up of two different dielectric materials (Rogers and air) under the quarter-wavelength condition. We show that controlling the Fibonacci parameters permits obtaining selective optical filters with a narrow passband, as well as polychromatic stop-band filters with varied properties that can be controlled as desired. From the results, we presented the self-similar features of the spectra, and we also presented the fractal process through a return map of the transmission coefficients. We extracted the band gaps of the hybrid quasi-periodic multilayered structures, called "pseudo band gaps", which often contain resonant states and can be considered a manifestation of numerous defects distributed along the structure. The transmittance spectra showed that the cutoff frequency can be manipulated through the thicknesses of the defects and the type of dielectric layers in the system. Taken together, the above two properties provide favorable conditions for the design of an all-microwave intermediate reflector.
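The standard way to compute such transmittance spectra is the 2×2 characteristic (transfer) matrix method. The sketch below implements it at normal incidence for a Fibonacci layer sequence; the refractive indices, design wavelength, and quarter-wave thicknesses are illustrative placeholders, not the paper's actual material parameters.

```python
import numpy as np

def fibonacci_word(n):
    """Fibonacci layer sequence: S1 = 'A', S2 = 'AB', S_n = S_{n-1} + S_{n-2}."""
    a, b = "A", "AB"
    for _ in range(n - 1):
        a, b = b, b + a
    return a

def transmittance(seq, n_layer, d_layer, wavelength):
    """Normal-incidence transmittance through a lossless multilayer in vacuum,
    via the product of 2x2 characteristic matrices."""
    M = np.eye(2, dtype=complex)
    for s in seq:
        n, d = n_layer[s], d_layer[s]
        phi = 2.0 * np.pi * n * d / wavelength      # phase thickness of the layer
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    n0 = ns = 1.0                                   # vacuum on both sides
    denom = n0 * M[0, 0] + n0 * ns * M[0, 1] + M[1, 0] + ns * M[1, 1]
    return (ns / n0) * abs(2.0 * n0 / denom)**2

lam0 = 0.06                                         # design wavelength (m), ~5 GHz
n_layer = {"A": 2.2, "B": 1.0}                      # hypothetical dielectric / air
d_layer = {k: lam0 / (4.0 * v) for k, v in n_layer.items()}  # quarter-wave layers
T = transmittance(fibonacci_word(8), n_layer, d_layer, lam0)
```

Sweeping the wavelength instead of evaluating at lam0 traces out the pseudo band gaps discussed in the abstract.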

  18. Scalar-pseudoscalar meson behavior and restoration of symmetries in SU(3) Polyakov-Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Costa, P.; Ruivo, M. C.; de Sousa, C. A.; Hansen, H.; Alberico, W. M.

    2009-06-01

    The modification of mesonic observables in a hot medium is analyzed as a tool to investigate the restoration of chiral and axial symmetries in the context of the Polyakov-loop extended Nambu-Jona-Lasinio model. The results of the extended model lead to the conclusion that the effects of the Polyakov loop are fundamental for reproducing lattice findings. In particular, the partial restoration of the chiral symmetry is faster in the Polyakov-Nambu-Jona-Lasinio model than in the Nambu-Jona-Lasinio one, and it is responsible for several effects: the meson-quark coupling constants show a remarkable difference in both models, there is a faster tendency to recover the Okubo-Zweig-Iizuka rule, and finally the topological susceptibility nicely reproduces the lattice results around T/Tc≈1.0.

  19. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-07

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll-compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43% in the optimal case, as compared to its planar counterpart. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry.

  20. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    NASA Astrophysics Data System (ADS)

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll-compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43% in the optimal case, as compared to its planar counterpart. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry.

  1. Finite temperature and the Polyakov loop in the covariant variational approach to Yang-Mills Theory

    NASA Astrophysics Data System (ADS)

    Quandt, Markus; Reinhardt, Hugo

    2017-03-01

    We extend the covariant variational approach for Yang-Mills theory in Landau gauge to non-zero temperatures. Numerical solutions for the thermal propagators are presented and compared to high-precision lattice data. To study the deconfinement phase transition, we adapt the formalism to background gauge and compute the effective action of the Polyakov loop for the colour groups SU(2) and SU(3). Using the zero-temperature propagators as input, all parameters are fixed at T = 0 and we find a clear signal for a deconfinement phase transition at finite temperatures, which is second order for SU(2) and first order for SU(3). The critical temperatures obtained are in reasonable agreement with lattice data.

  2. Covariant variational approach to Yang-Mills theory: Effective potential of the Polyakov loop

    NASA Astrophysics Data System (ADS)

    Quandt, M.; Reinhardt, H.

    2016-09-01

    We compute the effective action of the Polyakov loop in SU(2) and SU(3) Yang-Mills theory using a previously developed covariant variational approach. The formalism is extended to background gauge, and it is shown how to relate the low-order Green's functions to the ones in Landau gauge studied earlier. The renormalization procedure is discussed. The self-consistent effective action is derived and evaluated using the numerical solution of the gap equation. We find a clear signal for a deconfinement phase transition at finite temperatures, which is second order for SU(2) and first order for SU(3). The critical temperatures obtained are in reasonable agreement with high-precision lattice data.

  3. Comparison between the continuum threshold and the Polyakov loop as deconfinement order parameters

    NASA Astrophysics Data System (ADS)

    Carlomagno, J. P.; Loewe, M.

    2017-02-01

    We study the relation between the continuum threshold s0 within finite energy sum rules and the trace of the Polyakov loop Φ in the framework of a nonlocal SU(2) chiral quark model, establishing contact between both deconfinement order parameters at finite temperature T and chemical potential μ. In our analysis we also include the order parameter for chiral symmetry restoration, the chiral quark condensate. We find that s0 and Φ provide the same information about the deconfinement transition, both at zero and at finite chemical potential. At zero density the critical temperatures for both quantities coincide exactly, and at finite μ both order parameters provide evidence for the appearance of a quarkyonic phase.

  4. Trajectory optimization using regularized variables

    NASA Technical Reports Server (NTRS)

    Lewallen, J. M.; Szebehely, V.; Tapley, B. D.

    1969-01-01

    Regularized equations for a particular optimal trajectory are compared with unregularized equations with respect to computational characteristics, using perturbation-type numerical optimization. In the case of the three-dimensional, low-thrust, Earth-Jupiter rendezvous, the regularized equations yield a significant reduction in computer time.

  5. Two-dimensional simulation by regularization of free surface viscoplastic flows with Drucker-Prager yield stress and application to granular collapse

    NASA Astrophysics Data System (ADS)

    Lusso, Christelle; Ern, Alexandre; Bouchut, François; Mangeney, Anne; Farin, Maxime; Roche, Olivier

    2017-03-01

    This work is devoted to the numerical modeling and simulation of granular flows relevant to geophysical flows such as avalanches and debris flows. We consider an incompressible viscoplastic fluid, described by a rheology with pressure-dependent yield stress, in a 2D setting with a free surface. We implement a regularization method to deal with the singularity of the rheological law, using a mixed finite element approximation of the momentum and incompressibility equations, and an arbitrary Lagrangian Eulerian (ALE) formulation for the displacement of the domain. The free surface is evolved while handling its deposition onto the bottom and preventing it from folding over itself. Several tests are performed to assess the efficiency of our method. The first test is dedicated to verifying its accuracy and cost on a one-dimensional simple shear plug flow; on this configuration we set up rules for the choice of the numerical parameters. The second test compares the results of our numerical method with those predicted by an augmented Lagrangian formulation in the case of the collapse and spreading of a granular column over a horizontal rigid bed. Finally, we show the reliability of our method by comparing numerical predictions with data from experiments of granular collapse of both trapezoidal and rectangular columns over a horizontal rigid or erodible granular bed made of the same material. We compare the evolution of the free surface, the velocity profiles, and the static-flowing interface. The results show the ability of our method to deal numerically with the front behavior of granular collapses over an erodible bed.

  6. The consequences of SU(3) color singletness, Polyakov loop and Z(3) symmetry on a quark-gluon gas

    NASA Astrophysics Data System (ADS)

    Aminul Islam, Chowdhury; Abir, Raktim; Mustafa, Munshi G.; Ray, Rajarshi; Ghosh, Sanjay K.

    2014-02-01

    Based on quantum statistical mechanics, we show that the SU(3) color singlet ensemble of a quark-gluon gas exhibits a Z(3) symmetry through the normalized character in the fundamental representation and also becomes equivalent, within a stationary-point approximation, to the ensemble given by the Polyakov loop. In addition, a Polyakov loop gauge potential is obtained by considering spatial gluons along with the invariant Haar measure at each space point. The probability of the normalized character in SU(3) vis-a-vis the Polyakov loop is found to be maximum at a particular value, exhibiting a strong color correlation. This clearly indicates a transition from a color-correlated to an uncorrelated phase, or vice versa. When quarks are included in the gauge fields, a metastable state appears in the temperature range 145 ⩽ T(MeV) ⩽ 170 due to the explicit Z(3) symmetry breaking in the quark-gluon system. For T ⩾ 170 MeV, the metastable state disappears and stable domains appear. At low temperatures, a dynamical recombination of ionized Z(3) color charges into a color singlet Z(3) confined phase is evident, along with a confining background that originates from the circulation of two virtual spatial gluons, but with conjugate Z(3) phases, in a closed loop. We also discuss other possible consequences of the center domains in the color deconfined phase at high temperatures.

  7. Nonlocal Polyakov-Nambu-Jona-Lasinio model with wave function renormalization at finite temperature and chemical potential

    SciTech Connect

    Contrera, G. A.; Orsaria, M.; Scoccola, N. N.

    2010-09-01

    We study the phase diagram of strongly interacting matter in the framework of a nonlocal SU(2) chiral quark model which includes wave function renormalization and coupling to the Polyakov loop. Both nonlocal interactions based on the frequently used exponential form factor, and on fits to the quark mass and renormalization functions obtained in lattice calculations are considered. Special attention is paid to the determination of the critical points, both in the chiral limit and at finite quark mass. In particular, we study the position of the critical end point as well as the value of the associated critical exponents for different model parametrizations.

  8. Cluster algorithm for two-dimensional U(1) lattice gauge theory

    NASA Astrophysics Data System (ADS)

    Sinclair, R.

    1992-03-01

    We use gauge fixing to rewrite the two-dimensional U(1) pure gauge model with Wilson action and periodic boundary conditions as a nonfrustrated XY model on a closed chain. The Wolff single-cluster algorithm is then applied, eliminating critical slowing down of topological modes and Polyakov loops.

  9. A comparative study on two different approaches of bulk viscosity in the Polyakov-Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Saha, Kinkar; Upadhaya, Sudipa; Ghosh, Sabyasachi

    2017-02-01

    We present a comparative study of two different bulk viscosity expressions using a common dynamical model: the Polyakov-Nambu-Jona-Lasinio (PNJL) model in the mean-field approximation, including up to eight-quark interactions for 2+1 flavor quark matter. We probe the numerical equivalence as well as the discrepancies of the two expressions for bulk viscosity at vanishing quark chemical potential. Our estimate of the bulk viscosity to entropy density ratio follows a decreasing trend with temperature, as observed in most earlier investigations. We also extend our estimates to finite values of the quark chemical potential.

  10. Polyakov linear SU(3) σ model: Features of higher-order moments in a dense and thermal hadronic medium

    NASA Astrophysics Data System (ADS)

    Tawfik, A.; Magdy, N.; Diab, A.

    2014-05-01

    In order to characterize the higher-order moments of particle multiplicity, we implement the linear-sigma model with the Polyakov-loop correction. We first study the critical phenomena and estimate some thermodynamic quantities, comparing all these results with first-principle lattice QCD calculations. An extensive study of non-normalized four moments follows, investigating their thermal and density dependences; we repeat this for moments normalized to temperature and chemical potential. The fluctuations of the second-order moment are used to estimate the chiral phase transition. Finally, we use all of these to map out the chiral phase diagram, which is compared with the freeze-out parameters estimated from lattice QCD simulations and from thermal models.

  11. Total variation regularization with bounded linear variations

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2016-09-01

    One of the best-known techniques for signal denoising is total variation (TV) regularization. A better understanding of TV regularization is necessary to provide a stronger mathematical justification for using TV minimization in signal processing. In this work we deal with an intermediate case between the one- and two-dimensional settings; that is, the discrete function to be processed is two-dimensional, radially symmetric, and piecewise constant. For this case, the exact solution to the problem can be obtained as follows: first, calculate the average values of the noisy function over rings; second, calculate the shift values and their directions using closed formulae that depend on the regularization parameter and the structure of the rings. Although TV regularization is effective for noise removal, it often destroys fine details and thin structures in images. To overcome this drawback, we use TV regularization for signal denoising subject to the constraint that linear signal variations are bounded.
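The closed-form ring construction above is specific to radially symmetric signals. For intuition on what TV regularization does in general, a generic 1D denoiser can be sketched as gradient descent on a smoothed TV objective; the smoothing parameter `eps`, the step size, and the test signal below are illustrative choices, not the authors' method:

```python
import math
import random

def tv_denoise(f, lam=0.5, eps=1e-4, step=0.005, iters=8000):
    """Minimize 0.5*sum((u-f)^2) + lam*sum(sqrt((u[i+1]-u[i])^2 + eps))
    by plain gradient descent (eps smooths the non-differentiable |.|)."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - f[i] for i in range(n)]      # data-fit term
        for i in range(n - 1):                      # smoothed TV term
            d = u[i + 1] - u[i]
            g = lam * d / math.sqrt(d * d + eps)
            grad[i] -= g
            grad[i + 1] += g
        for i in range(n):
            u[i] -= step * grad[i]
    return u

random.seed(0)
clean = [0.0] * 20 + [1.0] * 20                     # one sharp edge
noisy = [x + random.gauss(0.0, 0.1) for x in clean]
denoised = tv_denoise(noisy)
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_denoised = sum((a - b) ** 2 for a, b in zip(denoised, clean))
```

The plateaus are flattened while the jump survives (slightly shrunk), which is exactly the edge-preserving behavior, and the detail-destroying tendency, that the abstract discusses.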

  12. Transport Code for Regular Triangular Geometry

    SciTech Connect

    1993-06-09

    DIAMANT2 solves the two-dimensional static multigroup neutron transport equation in planar regular triangular geometry. Both regular and adjoint, inhomogeneous and homogeneous problems subject to vacuum, reflective or input specified boundary flux conditions are solved. Anisotropy is allowed for the scattering source. Volume and surface sources are allowed for inhomogeneous problems.

  13. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible but also well-conditioned. Although many regularization schemes attempt to do this, none of them addresses the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required.
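The paper's estimator maximizes the likelihood under a condition-number constraint; a much cruder sketch of the same idea simply clips the eigenvalues of a 2×2 sample covariance so that their ratio stays below a chosen bound κ. The clipping rule and the example matrix are our simplifications for illustration, not the authors' estimator:

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues/eigenvectors of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = 0.5 * (a + c)
    r = math.hypot(0.5 * (a - c), b)
    l1, l2 = mean + r, mean - r                 # l1 >= l2
    if abs(b) > 1e-15:
        v1, v2 = (b, l1 - a), (b, l2 - a)       # (A - l I) v = 0
    else:
        v1, v2 = (1.0, 0.0), (0.0, 1.0)
        if a < c:
            v1, v2 = v2, v1
    def unit(v):
        n = math.hypot(*v)
        return (v[0] / n, v[1] / n)
    return (l1, l2), (unit(v1), unit(v2))

def condreg2(S, kappa):
    """Raise the small eigenvalue of S so that cond(S_hat) <= kappa."""
    (l1, l2), (v1, v2) = eig_sym2(S[0][0], S[0][1], S[1][1])
    l2 = max(l2, l1 / kappa)
    out = [[0.0, 0.0], [0.0, 0.0]]
    for lam, v in ((l1, v1), (l2, v2)):         # rebuild lam1*v1v1' + lam2*v2v2'
        for i in range(2):
            for j in range(2):
                out[i][j] += lam * v[i] * v[j]
    return out, l1 / l2

S = [[4.0, 1.9], [1.9, 1.0]]                    # ill-conditioned sample covariance
S_hat, cond = condreg2(S, kappa=10.0)
```

The regularized matrix stays symmetric and positive definite, and its condition number is capped at κ, which is the "well-conditioned" requirement the abstract emphasizes.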

  14. SU(3) Polyakov linear-sigma model: Conductivity and viscous properties of QCD matter in thermal medium

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser; Diab, Abdel Magied; Hussein, M. T.

    2016-11-01

    In the mean-field approximation, the grand canonical potential of the SU(3) Polyakov linear-σ model (PLSM) is analyzed for the chiral order parameters σl and σs and for the deconfinement order parameters ϕ and ϕ∗ of light and strange quarks, respectively. Various PLSM parameters are determined by globally minimizing the real part of the potential. We then calculate the subtracted condensates (Δl,s). All these results are compared with recent lattice QCD simulations, and the essential PLSM parameters are fixed accordingly. A model of the relaxation time is utilized in estimating the conductivity properties of QCD matter in a thermal medium, namely the electric [σel(T)] and heat [κ(T)] conductivities. We find that the PLSM results on the electric conductivity and on the specific heat agree well with the available lattice QCD calculations. We also calculate the bulk and shear viscosities normalized to the thermal entropy, ξ/s and η/s, respectively, and compare them with recent lattice QCD results. Predictions for (ξ/s)/(σel/T) and (η/s)/(σel/T) are introduced. We conclude that our results on various transport properties capture essential ingredients for studying QCD matter in a thermal and dense medium.

  15. Wavelet Characterizations of Multi-Directional Regularity

    NASA Astrophysics Data System (ADS)

    Slimane, Mourad Ben

    2011-05-01

    The study of d-dimensional traces of functions of m variables leads to directional behaviors. The purpose of this paper is two-fold. Firstly, we extend the notion of one-direction pointwise Hölder regularity introduced by Jaffard to multi-directions. Secondly, we characterize multi-directional pointwise regularity by Triebel anisotropic wavelet coefficients (resp. leaders), and also by the Calderón anisotropic continuous wavelet transform.

  16. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung Fung

    1988-01-01

    Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.
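The computation-to-communication ratio used above to compare partitioning schemes can be made concrete: under the model's assumptions (equal work per mesh point, nearest-neighbour communication), an interior partition computes in proportion to its area and communicates in proportion to its perimeter. The 5-point stencil and the specific mesh sizes below are our illustrative assumptions:

```python
# Computation-to-communication ratio for partitioning an N x N mesh
# among P processors, assuming a 5-point nearest-neighbour stencil.
# Computation ~ points owned; communication ~ boundary points exchanged.

def ratio_strips(n, p):
    """Horizontal strips: each interior strip is (n/p) x n, two shared edges."""
    if p == 1:
        return float('inf')                 # nothing to communicate
    rows = n // p
    comp = rows * n
    comm = 2 * n                            # top + bottom edges exchanged
    return comp / comm

def ratio_blocks(n, q):
    """q x q grid of square blocks (p = q*q): four shared edges per block."""
    side = n // q
    comp = side * side
    comm = 4 * side
    return comp / comm

# 1024 x 1024 mesh on 16 processors: square blocks give a better
# (higher) computation-to-communication ratio than strips.
print(ratio_strips(1024, 16), ratio_blocks(1024, 4))
```

For a fixed processor count, shrinking the perimeter per unit area is exactly the effect the abstract attributes to the choice of partitioning scheme.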

  19. Regularization of B-Spline Objects.

    PubMed

    Xu, Guoliang; Bajaj, Chandrajit

    2011-01-01

    By a d-dimensional B-spline object we mean a B-spline curve (d = 1), a B-spline surface (d = 2) or a B-spline volume (d = 3). By regularization of a B-spline object we mean the process of relocating its control points such that the object approximates an isometric map of its definition domain in certain directions and is shape preserving. In this paper we develop an efficient regularization method for B-spline objects with d = 1, 2, 3, based on solving weak-form L(2)-gradient flows constructed from the minimization of certain regularizing energy functionals. These flows are integrated via the finite element method using B-spline basis functions. Our experimental results demonstrate that our new regularization method is very effective.

  20. Regular phantom black holes.

    PubMed

    Bronnikov, K A; Fabris, J C

    2006-06-30

    We study self-gravitating, static, spherically symmetric phantom scalar fields with arbitrary potentials (favored by cosmological observations) and single out 16 classes of possible regular configurations with flat, de Sitter, and anti-de Sitter asymptotics. Among them are traversable wormholes, bouncing Kantowski-Sachs (KS) cosmologies, and asymptotically flat black holes (BHs). A regular BH has a Schwarzschild-like causal structure, but the singularity is replaced by a de Sitter infinity, giving a hypothetical BH explorer a chance to survive. It also looks possible that our Universe originated in a phantom-dominated collapse in another universe, with KS expansion and isotropization after crossing the horizon. Explicit examples of regular solutions are built and discussed. Possible generalizations include k-essence type scalar fields (with a potential) and scalar-tensor gravity.

  1. Regularized Structural Equation Modeling.

    PubMed

    Jacobucci, Ross; Grimm, Kevin J; McArdle, John J

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.

  2. Manifold Regularized Reinforcement Learning.

    PubMed

    Li, Hongliang; Liu, Derong; Wang, Ding

    2017-01-27

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  3. Synchronization of Regular Automata

    NASA Astrophysics Data System (ADS)

    Caucal, Didier

    Functional graph grammars are finite devices which generate the class of regular automata. We recall the notion of synchronization by grammars, and for any given grammar we consider the class of languages recognized by automata generated by all its synchronized grammars. The synchronization is an automaton-related notion: all grammars generating the same automaton synchronize the same languages. When the synchronizing automaton is unambiguous, the class of its synchronized languages forms an effective boolean algebra lying between the classes of regular languages and unambiguous context-free languages. We additionally provide sufficient conditions for such classes to be closed under concatenation and its iteration.

  4. Regular transport dynamics produce chaotic travel times

    NASA Astrophysics Data System (ADS)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F.; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  6. Geometry of spinor regularization

    NASA Technical Reports Server (NTRS)

    Hestenes, D.; Lounesto, P.

    1983-01-01

    The Kustaanheimo theory of spinor regularization is given a new formulation in terms of geometric algebra. The Kustaanheimo-Stiefel matrix and its subsidiary condition are put in a spinor form directly related to the geometry of the orbit in physical space. A physically significant alternative to the KS subsidiary condition is discussed. Derivations are carried out without using coordinates.

  7. Perturbations in a regular bouncing universe

    SciTech Connect

    Battefeld, T.J.; Geshnizjani, G.

    2006-03-15

    We consider a simple toy model of a regular bouncing universe. The bounce is caused by an extra timelike dimension, which leads to a sign flip of the ρ² term in the effective four-dimensional Randall-Sundrum-like description. We find a wide class of possible bounces: big bang avoiding ones for regular matter content, and big rip avoiding ones for phantom matter. Focusing on radiation as the matter content, we discuss the evolution of scalar, vector and tensor perturbations. We compute a spectral index of n_s = -1 for scalar perturbations and a deep blue index for tensor perturbations after invoking vacuum initial conditions, ruling out such a model as a realistic one. We also find that the spectrum (evaluated at Hubble crossing) is sensitive to the bounce. We conclude that it is challenging, but not impossible, for cyclic/ekpyrotic models to succeed, if one can find a regularized version.

  8. Krein regularization of QED

    SciTech Connect

    Forghan, B.; Takook, M.V.; Zarei, A.

    2012-09-15

    In this paper, the electron self-energy, photon self-energy and vertex functions are explicitly calculated in Krein space quantization including quantum metric fluctuation. The results are automatically regularized or finite. The magnetic anomaly and Lamb shift are also calculated in the one-loop approximation in this method. Finally, the obtained results are compared to conventional QED results. Highlights: Krein regularization yields finite values for the photon and electron self-energies and the vertex function. The magnetic anomaly is calculated and is exactly the same as the conventional result. The Lamb shift is calculated and is approximately the same as in Hilbert space.

  9. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
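The paper couples expected shortfall with an L2 regularizer; as a toy illustration of why the L2 penalty stabilizes the weights, consider ridge-regularized minimum-variance weights w ∝ (Σ + γI)⁻¹1. Using variance in place of expected shortfall, and the specific covariance numbers below, are our simplifications:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def min_var_weights(cov, gamma):
    """Minimize w'Σw + γ‖w‖² subject to sum(w) = 1: w ∝ (Σ + γI)⁻¹ 1."""
    n = len(cov)
    A = [[cov[i][j] + (gamma if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    w = solve(A, [1.0] * n)
    s = sum(w)
    return [wi / s for wi in w]

cov = [[0.040, 0.038, 0.001],   # two nearly collinear assets plus a safer one
       [0.038, 0.041, 0.001],
       [0.001, 0.001, 0.020]]
w_raw = min_var_weights(cov, gamma=0.0)
w_reg = min_var_weights(cov, gamma=0.05)
```

The penalty shrinks the weight vector's L2 norm, i.e. pushes the portfolio toward the diversified equal-weight solution, which is the "diversification pressure" described in the abstract.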

  10. Strongly Regular Graphs,

    DTIC Science & Technology

    1973-10-01

    The theory of strongly regular graphs was introduced by Bose [7] in 1963, in connection with partial geometries and 2-class association schemes. In such a graph, the number of vertices adjacent to a pair of adjacent (resp. non-adjacent) vertices is a constant. We denote by Γ(p) (resp. Γ̄(p)) the set of vertices adjacent (resp. non-adjacent) to a vertex p. The complement of a graph has the same vertex set, and two vertices are adjacent in the complement if and only if they were not adjacent in the original graph.

  11. Regularized versus non-regularized statistical reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Denisova, N. V.

    2011-08-01

    An important feature of positron emission tomography (PET) and single photon emission computed tomography (SPECT) is the stochastic nature of real clinical data. Statistical algorithms such as ordered-subset expectation maximization (OSEM) and maximum a posteriori (MAP) are a direct consequence of the stochastic nature of the data. The principal difference between these two algorithms is that OSEM is a non-regularized approach, while MAP is a regularized algorithm. From the theoretical point of view, reconstruction problems belong to the class of ill-posed problems and should be treated with regularization. Regularization introduces an additional unknown regularization parameter into the reconstruction procedure, as compared with non-regularized algorithms. However, a comparison of the non-regularized OSEM and regularized MAP algorithms with fixed regularization parameters has shown only minor differences between reconstructions. This problem is analyzed in the present paper. To improve the reconstruction quality, a method of local regularization is proposed, based on a spatially adaptive regularization parameter. The MAP algorithm with local regularization was tested on reconstruction of the Hoffman brain phantom.
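The OSEM family discussed here is built on the basic MLEM multiplicative update x_j ← (x_j / Σ_i A_ij) · Σ_i A_ij y_i / (Ax)_i, which preserves non-negativity and needs no regularization parameter. A minimal sketch on a toy two-pixel system (the tiny system matrix is invented for illustration, with noise-free data):

```python
def mlem(A, y, iters=200):
    """Plain MLEM for the emission model y ~ Poisson(A x), x >= 0."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                   # strictly positive start
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            back = sum(A[i][j] * y[i] / Ax[i] for i in range(m))
            x[j] *= back / sens[j]                  # multiplicative update
    return x

# Tiny 2-pixel, 2-ray system with noise-free data y = A x_true.
A = [[1.0, 0.2],
     [0.3, 1.0]]
x_true = [2.0, 5.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]
x = mlem(A, y)
```

With noisy data these iterates eventually amplify noise, which is exactly why the MAP variant adds the (here spatially adaptive) regularization term the paper studies.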

  12. Flexible sparse regularization

    NASA Astrophysics Data System (ADS)

    Lorenz, Dirk A.; Resmerita, Elena

    2017-01-01

    The seminal paper of Daubechies, Defrise, and De Mol made clear that ℓp spaces with p ∈ [1, 2) and p-powers of the corresponding norms are appropriate settings for dealing with the reconstruction of sparse solutions of ill-posed problems by regularization. It seems that the case p = 1 provides the best results in most situations compared to the cases p ∈ (1, 2). An extensive literature also gives great credit to using ℓp spaces with p ∈ (0, 1) together with the corresponding quasi-norms, although one has to tackle challenging numerical problems raised by the non-convexity of the quasi-norms. In any of these settings, whether superlinear, linear or sublinear, the question of how to choose the exponent p has been not only a numerical issue, but also a philosophical one. In this work we introduce a more flexible way of sparse regularization with varying exponents. We introduce the corresponding functional analytic framework, which leaves the setting of normed spaces but works with so-called F-norms. One curious result is that there are F-norms which generate the ℓ1 space yet are strictly convex, while the ℓ1-norm is just convex.

  13. Mainstreaming the Regular Classroom Student.

    ERIC Educational Resources Information Center

    Kahn, Michael

    The paper presents activities, suggested by regular classroom teachers, to help prepare the regular classroom student for mainstreaming. The author points out that regular classroom children need a vehicle in which curiosity, concern, interest, fear, attitudes and feelings can be fully explored, where prejudices can be dispelled, and where the…

  14. A dynamic phase-field model for structural transformations and twinning: Regularized interfaces with transparent prescription of complex kinetics and nucleation. Part I: Formulation and one-dimensional characterization

    NASA Astrophysics Data System (ADS)

    Agrawal, Vaibhav; Dayal, Kaushik

    2015-12-01

    The motion of microstructural interfaces is important in modeling twinning and structural phase transformations. Continuum models fall into two classes: sharp-interface models, where interfaces are singular surfaces; and regularized-interface models, such as phase-field models, where interfaces are smeared out. The former are challenging for numerical solutions because the interfaces need to be explicitly tracked, but have the advantage that the kinetics of existing interfaces and the nucleation of new interfaces can be transparently and precisely prescribed. In contrast, phase-field models do not require explicit tracking of interfaces, thereby enabling relatively simple numerical calculations, but the specification of kinetics and nucleation is both restrictive and extremely opaque. This prevents straightforward calibration of phase-field models to experiment and/or molecular simulations, and breaks the multiscale hierarchy of passing information from atomic to continuum. Consequently, phase-field models cannot be confidently used in dynamic settings. This shortcoming of existing phase-field models motivates our work. We present the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients are a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation - through the source term of the conservation law - and kinetics - through a distinct interfacial velocity field. A formal limit of the kinetic driving force recovers the classical continuum sharp-interface driving force, providing confidence in both the re-parametrized energy and the evolution statement. We present some 1D calculations characterizing the formulation; in a

  15. Ensemble manifold regularization.

    PubMed

    Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng

    2012-06-01

We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems stem from the suboptimality incurred by discrete grid search and from overfitting. We therefore develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic in learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
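The core of the EMR idea, replacing a single guessed graph Laplacian by a convex combination of candidates built from a hyperparameter grid, can be sketched as follows (toy data and uniform weights; the paper learns the weights jointly with the semi-supervised learner, which is not reproduced here):

```python
# Sketch of the composite-manifold idea behind ensemble manifold regularization
# (EMR): combine candidate graph Laplacians, each built with a different
# hyperparameter guess, into one regularizer. All names and data are illustrative.
import numpy as np

def knn_laplacian(X, k, sigma):
    """Unnormalized graph Laplacian of a kNN graph with Gaussian edge weights."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]          # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                          # symmetrize the graph
    return np.diag(W.sum(1)) - W

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))

# Candidate Laplacians from a grid of hyperparameter guesses.
candidates = [knn_laplacian(X, k, s) for k in (3, 5) for s in (0.5, 1.0)]

# EMR composes them with convex weights mu (uniform here; learned in the paper).
mu = np.full(len(candidates), 1.0 / len(candidates))
L_composite = sum(m * L for m, L in zip(mu, candidates))

# The composite manifold penalty of a label vector f is f' L f.
f = rng.normal(size=30)
penalty = f @ L_composite @ f   # nonnegative, since each Laplacian is PSD
print(penalty)
```

Each candidate Laplacian is positive semi-definite, so any convex combination is as well, which is what makes the composite penalty a valid manifold regularizer.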

  16. Chaos regularization of quantum tunneling rates.

    PubMed

    Pecora, Louis M; Lee, Hoshik; Wu, Dong-Ho; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward

    2011-06-01

Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. In contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.

  17. Regular black holes with flux tube core

    SciTech Connect

    Zaslavskii, Oleg B.

    2009-09-15

We consider a class of black holes for which the area of the two-dimensional spatial cross section has a minimum on the horizon with respect to a quasiglobal (Kruskal-like) coordinate. If the horizon is regular, one can generate a tubelike counterpart of such a metric and smoothly glue it to the black hole region. The resulting composite space-time is globally regular, so all potential singularities under the horizon of the original metric are removed. Such a space-time represents a black hole without an apparent horizon. It is essential that the matter be nonvacuum in the outer region but vacuumlike in the inner one. As an example we consider a noninteracting mixture of vacuum fluid and matter with a linear equation of state and scalar phantom fields. This approach is extended to distorted metrics, with the requirement of spherical symmetry relaxed.

  18. Consistent regularization and renormalization in models with inhomogeneous phases

    NASA Astrophysics Data System (ADS)

    Adhikari, Prabal; Andersen, Jens O.

    2017-02-01

In many models in condensed-matter and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one take the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different ways of consistently regularizing and renormalizing quantum fluctuations, focusing on momentum cutoff, symmetric energy cutoff, and dimensional regularization. We apply these techniques to the calculation of the vacuum energy in the Nambu-Jona-Lasinio model in 1+1 dimensions in the large-Nc limit and in the 3+1 dimensional quark-meson model in the mean-field approximation, in both cases for a one-dimensional chiral-density wave.

  19. On regular rotating black holes

    NASA Astrophysics Data System (ADS)

    Torres, R.; Fayos, F.

    2017-01-01

    Different proposals for regular rotating black hole spacetimes have appeared recently in the literature. However, a rigorous analysis and proof of the regularity of this kind of spacetimes is still lacking. In this note we analyze rotating Kerr-like black hole spacetimes and find the necessary and sufficient conditions for the regularity of all their second order scalar invariants polynomial in the Riemann tensor. We also show that the regularity is linked to a violation of the weak energy conditions around the core of the rotating black hole.

  20. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    PubMed

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

In this paper, we present a novel low-rank matrix factorization algorithm with an adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Unlike MMF, which constructs an affinity graph in advance, LMFAGR simultaneously seeks the graph weight matrix and low-dimensional representations of the data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. Experimental results on several data sets demonstrate that the proposed algorithm outperforms state-of-the-art low-rank matrix factorization methods.

  1. Some results on the spectra of strongly regular graphs

    NASA Astrophysics Data System (ADS)

    Vieira, Luís António de Almeida; Mano, Vasco Moço

    2016-06-01

Let G be a strongly regular graph whose adjacency matrix is A. We associate with the strongly regular graph G a real finite-dimensional Euclidean Jordan algebra 𝒱 of rank three, spanned by I and the natural powers of A, endowed with the Jordan product of matrices and with the inner product given by the usual trace of matrices. Finally, by analyzing the binomial Hadamard series of an element of 𝒱, we establish some inequalities on the parameters and on the spectrum of a strongly regular graph like those established in Theorems 3 and 4.

  2. Quaternion regularization and stabilization of perturbed central motion. II

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    1993-04-01

    Generalized regular quaternion equations for the three-dimensional two-body problem in terms of Kustaanheimo-Stiefel variables are obtained within the framework of the quaternion theory of regularizing and stabilizing transformations of the Newtonian equations for perturbed central motion. Regular quaternion equations for perturbed central motion of a material point in a central field with a certain potential Pi are also derived in oscillatory and normal forms. In addition, systems of perturbed central motion equations are obtained which include quaternion equations of perturbed orbit orientations in oscillatory or normal form, and a generalized Binet equation is derived. A comparative analysis of the equations is carried out.

  3. Linear regularity and φ-regularity of nonconvex sets

    NASA Astrophysics Data System (ADS)

    Ng, Kung Fu; Zang, Rui

    2007-04-01

    In this paper, we discuss some sufficient conditions for the linear regularity and bounded linear regularity (and their variations) of finitely many closed (not necessarily convex) sets in a normed vector space. The accompanying necessary conditions are also given in the setting of Asplund spaces.

  4. A dynamic phase-field model for structural transformations and twinning: Regularized interfaces with transparent prescription of complex kinetics and nucleation. Part II: Two-dimensional characterization and boundary kinetics

    NASA Astrophysics Data System (ADS)

    Agrawal, Vaibhav; Dayal, Kaushik

    2015-12-01

A companion paper presented the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients were a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation through the source term of the conservation law and of kinetics through an interfacial velocity field. This model overcomes an important shortcoming of existing phase-field models, namely that the specification of kinetics and nucleation is both restrictive and extremely opaque. In this paper, we present a number of numerical calculations - in one and two dimensions - that characterize our formulation. These calculations illustrate (i) highly sensitive rate-dependent nucleation; (ii) independent prescription of the forward and backward nucleation stresses without changing the energy landscape; (iii) stick-slip interface kinetics; (iv) the competition between nucleation and kinetics in determining the final microstructural state; (v) the effect of anisotropic kinetics; and (vi) the effect of non-monotone kinetics. These calculations demonstrate the ability of this formulation to precisely prescribe complex nucleation and kinetics in a simple and transparent manner. We also extend our conservation statement to describe the kinetics of the junction lines between microstructural interfaces and boundaries. This enables us to prescribe an additional kinetic relation for the boundary, and we examine the interplay between the bulk kinetics and the junction kinetics.

  5. Regularly timed events amid chaos

    NASA Astrophysics Data System (ADS)

    Blakely, Jonathan N.; Cooper, Roy M.; Corron, Ned J.

    2015-11-01

    We show rigorously that the solutions of a class of chaotic oscillators are characterized by regularly timed events in which the derivative of the solution is instantaneously zero. The perfect regularity of these events is in stark contrast with the well-known unpredictability of chaos. We explore some consequences of these regularly timed events through experiments using chaotic electronic circuits. First, we show that a feedback loop can be implemented to phase lock the regularly timed events to a periodic external signal. In this arrangement the external signal regulates the timing of the chaotic signal but does not strictly lock its phase. That is, phase slips of the chaotic oscillation persist without disturbing timing of the regular events. Second, we couple the regularly timed events of one chaotic oscillator to those of another. A state of synchronization is observed where the oscillators exhibit synchronized regular events while their chaotic amplitudes and phases evolve independently. Finally, we add additional coupling to synchronize the amplitudes, as well, however in the opposite direction illustrating the independence of the amplitudes from the regularly timed events.

  6. SU(3) Polyakov linear-σ model in magnetic fields: Thermodynamics, higher-order moments, chiral phase structure, and meson masses

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser; Magdy, Niseem

    2015-01-01

Effects of an external magnetic field on various properties of quantum chromodynamics (QCD) matter under extreme conditions of temperature and density (chemical potential) have been analyzed. To this end, we use the SU(3) Polyakov linear-σ model and assume that the external magnetic field (eB) adds some restrictions to the quarks' energy due to the existence of free charges in the plasma phase. In doing this, we apply the Landau theory of quantization, which assumes that the cyclotron orbits of charged particles in a magnetic field should be quantized. This requires an additional temperature to drive the system through the chiral phase transition. Accordingly, the dependence of the critical temperature of the chiral and confinement phase transitions on the magnetic field is characterized. Based on this, we have studied the thermal evolution of thermodynamic quantities (energy density and trace anomaly) and the first four higher-order moments of particle multiplicity. With these calculations, we have studied the effects of the magnetic field on the chiral phase transition. We found that both the critical temperature Tc and the critical chemical potential increase with increasing magnetic field eB. Last but not least, the magnetic effects on the thermal evolution of four scalar and four pseudoscalar meson states are studied. We concluded that the meson masses decrease as the temperature increases up to Tc. Then, the vacuum effect becomes dominant and the masses rapidly increase with the temperature T. At low T, the scalar meson masses normalized to the lowest Matsubara frequency rapidly decrease as T increases. Then, starting from Tc, we find that the thermal dependence almost vanishes. Furthermore, the meson masses increase with increasing magnetic field. This gives a characteristic phase diagram of T vs external magnetic field eB. At high T, we find that the masses of almost all meson states become temperature independent.
It is worthwhile to highlight that the various meson
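As background for the Landau quantization step invoked in this abstract (a standard textbook relation, not quoted from the paper), the transverse motion of a quark with electric charge q_f in a constant field eB is restricted to discrete levels:

```latex
E_n(p_z) = \sqrt{p_z^{2} + m_q^{2} + 2 n \,\lvert q_f \rvert\, eB}\,, \qquad n = 0, 1, 2, \ldots
```

where m_q is the quark mass and n labels the Landau levels; in the thermodynamic potential, the integral over transverse momenta is then replaced by a sum over n, which is how the field eB enters the quarks' energy.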

  7. Rotating regular black hole solution

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

Based on the Newman-Janis algorithm, the Ayón-Beato-García spacetime metric [Phys. Rev. Lett. 80, 5056 (1998)] of the regular spherically symmetric, static, and charged black hole has been converted into rotational form. It is shown that the derived rotating black hole solution is regular, and that the critical value of the electric charge for which the two horizons merge into one decreases significantly in the presence of a nonvanishing rotation parameter a of the black hole.

  8. NONCONVEX REGULARIZATION FOR SHAPE PRESERVATION

    SciTech Connect

    CHARTRAND, RICK

    2007-01-16

The authors show that using a nonconvex penalty term to regularize image reconstruction can substantially improve the preservation of object shapes. The commonly used total-variation regularization, ∫|∇u|, penalizes the length of the object edges. They show that ∫|∇u|^p, 0 < p < 1, only penalizes edges of dimension at least 2-p, and thus does not penalize finite-length edges at all. They give numerical examples showing the resulting improvement in shape preservation.
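The scaling claim behind the nonconvex penalty can be checked with a one-line discrete computation (my own illustration, not from the paper): for a unit step smeared over one grid cell of width dx, the discrete penalty is (1/dx)^p · dx = dx^(1-p), which vanishes under grid refinement for p < 1 while the total-variation case p = 1 stays fixed at the jump height.

```python
# Illustration of why the nonconvex penalty integral |grad u|^p, 0 < p < 1,
# stops penalizing sharp edges: a unit jump concentrated in one cell of
# width dx contributes (1/dx)^p * dx = dx^(1-p) to the discrete penalty.
def edge_penalty(dx, p):
    grad = 1.0 / dx            # unit jump concentrated in one cell
    return grad ** p * dx      # one-cell Riemann sum of |u'|^p

for dx in (1e-1, 1e-2, 1e-3):
    tv = edge_penalty(dx, 1.0)      # total variation: stays at 1.0
    ncv = edge_penalty(dx, 0.5)     # nonconvex p = 0.5: shrinks like sqrt(dx)
    print(f"dx={dx:g}  TV={tv:.3f}  p=0.5 penalty={ncv:.4f}")
```

As dx shrinks, the p = 0.5 penalty of the edge goes to zero, matching the statement that finite-length edges are "not penalized at all" in the continuum limit.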

  9. Geometric continuum regularization of quantum field theory

    SciTech Connect

Halpern, M.B. (Dept. of Physics)

    1989-11-08

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs.

  10. Automatic Constraint Detection for 2D Layout Regularization.

    PubMed

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
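The quadratic-programming formulation can be illustrated with a toy version (my own construction, not the authors' solver): keep each element near its input position while softly enforcing a detected alignment constraint, which for a quadratic objective reduces to one linear system.

```python
# Toy sketch of layout regularization as a quadratic program: minimize
# ||x - x0||^2 + w * sum over detected pairs (i,j) of (x_i - x_j)^2.
# Setting the gradient to zero gives the linear system A x = x0.
import numpy as np

x0 = np.array([10.2, 10.9, 30.1])   # input left edges of three boxes
pairs = [(0, 1)]                     # detected constraint: boxes 0 and 1 align
w = 100.0                            # alignment weight

n = len(x0)
A = np.eye(n)
for i, j in pairs:
    A[i, i] += w; A[j, j] += w       # add w * (e_i - e_j)(e_i - e_j)^T
    A[i, j] -= w; A[j, i] -= w
x = np.linalg.solve(A, x0)
print(np.round(x, 2))                # boxes 0 and 1 snap toward a shared edge
```

The unconstrained box keeps its input position exactly, while the constrained pair meets near the midpoint of their input edges; a full system would detect the pairs automatically and include size and distance constraints in the same objective.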

  11. Conformal regularization of Einstein's field equations

    NASA Astrophysics Data System (ADS)

    Röhr, Niklas; Uggla, Claes

    2005-09-01

    To study asymptotic structures, we regularize Einstein's field equations by means of conformal transformations. The conformal factor is chosen so that it carries a dimensional scale that captures crucial asymptotic features. By choosing a conformal orthonormal frame, we obtain a coupled system of differential equations for a set of dimensionless variables, associated with the conformal dimensionless metric, where the variables describe ratios with respect to the chosen asymptotic scale structure. As examples, we describe some explicit choices of conformal factors and coordinates appropriate for the situation of a timelike congruence approaching a singularity. One choice is shown to just slightly modify the so-called Hubble-normalized approach, and one leads to dimensionless first-order symmetric hyperbolic equations. We also discuss differences and similarities with other conformal approaches in the literature, as regards, e.g., isotropic singularities.

  12. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  13. Giftedness in the Regular Classroom.

    ERIC Educational Resources Information Center

    Green, Anne

    This paper presents a rationale for serving gifted students in the regular classroom and offers guidelines for recognizing students who are gifted in the seven types of intelligence proposed by Howard Gardner. Stressed is the importance of creating in the classroom a community of learners that allows all children to actively explore ideas and…

  14. 76 FR 3629 - Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-20

... From the Federal Register Online via the Government Publishing Office. FARM CREDIT SYSTEM INSURANCE CORPORATION: Farm Credit System Insurance Corporation Board Regular Meeting. SUMMARY: Notice is hereby given... A. Approval of Minutes, December 9, 2010. B. New Business: Review of Insurance Premium Rates. FCSIC...

  15. Regularization of Localized Degradation Processes

    DTIC Science & Technology

    1996-12-28

... in order to assess the regularization properties of non-classical micropolar Cosserat continua, which feature non-symmetric stress and strain tensors because of the presence of couple-stresses and micro-curvatures. It was shown that micropolar media may only exhibit localized failure in the form of tensile...

  16. Resource Guide for Regular Teachers.

    ERIC Educational Resources Information Center

    Kampert, George J.

    The resource guide for regular teachers provides policies and procedures of the Flour Bluff (Texas) school district regarding special education of handicapped students. Individual sections provide guidelines for the following areas: the referral process; individual assessment; participation on student evaluation and placement committee; special…

  17. Temporal regularity in speech perception: Is regularity beneficial or deleterious?

    PubMed

    Geiser, Eveline; Shattuck-Hufnagel, Stefanie

    2012-04-01

    Speech rhythm has been proposed to be of crucial importance for correct speech perception and language learning. This study investigated the influence of speech rhythm in second language processing. German pseudo-sentences were presented to participants in two conditions: 'naturally regular speech rhythm' and an 'emphasized regular rhythm'. Nine expert English speakers with 3.5±1.6 years of German training repeated each sentence after hearing it once over headphones. Responses were transcribed using the International Phonetic Alphabet and analyzed for the number of correct, false and missing consonants as well as for consonant additions. The over-all number of correct reproductions of consonants did not differ between the two experimental conditions. However, speech rhythmicization significantly affected the serial position curve of correctly reproduced syllables. The results of this pilot study are consistent with the view that speech rhythm is important for speech perception.

  18. On different facets of regularization theory.

    PubMed

    Chen, Zhe; Haykin, Simon

    2002-12-01

This review provides a comprehensive understanding of regularization theory from different perspectives, emphasizing smoothness and simplicity principles. Using the tools of operator theory and Fourier analysis, it is shown that the solution of the classical Tikhonov regularization problem can be derived from the regularized functional defined by a linear differential (integral) operator in the spatial (Fourier) domain. State-of-the-art research relevant to regularization theory is reviewed, covering Occam's razor, minimum description length, Bayesian theory, pruning algorithms, informational (entropy) theory, statistical learning theory, and equivalent regularization. The universal principle of regularization in terms of Kolmogorov complexity is discussed. Finally, some prospective studies on regularization theory and beyond are suggested.
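The classical Tikhonov problem the review starts from has a compact closed form; the following sketch (toy data, with the regularizing operator taken as the identity, i.e., ridge regression) shows it stabilizing an ill-conditioned least-squares problem.

```python
# Classical Tikhonov regularization: minimize ||A x - b||^2 + lam * ||x||^2,
# whose minimizer solves the regularized normal equations
# (A'A + lam I) x = A' b. Toy ill-conditioned polynomial design matrix.
import numpy as np

rng = np.random.default_rng(1)
A = np.vander(np.linspace(0, 1, 20), 8)     # nearly rank-deficient columns
x_true = rng.normal(size=8)
b = A @ x_true + 1e-3 * rng.normal(size=20) # data with a little noise

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

x_ls = tikhonov(A, b, 0.0)     # plain least squares: amplifies the noise
x_reg = tikhonov(A, b, 1e-6)   # small penalty tames the solution norm
print(np.linalg.norm(x_ls), np.linalg.norm(x_reg))
```

Even a tiny penalty damps the components of the solution along the small singular values, which is the "smoothness/simplicity" trade-off the review analyzes in operator and Fourier terms.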

  19. Regular Motions of Resonant Asteroids

    NASA Astrophysics Data System (ADS)

    Ferraz-Mello, S.

    1990-11-01

This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS

  20. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  1. Note on entanglement entropy and regularization in holographic interface theories

    NASA Astrophysics Data System (ADS)

    Gutperle, Michael; Trivella, Andrea

    2017-03-01

We discuss the computation of holographic entanglement entropy for interface conformal field theories. The fact that globally well-defined Fefferman-Graham coordinates are difficult to construct makes the regularization of the holographic theory challenging. We introduce a simple new cutoff procedure, which we call "double cutoff" regularization. We test the new cutoff procedure by comparing the resulting holographic entanglement entropies with those from other cutoff procedures and find agreement. We also study three-dimensional conformal field theories with a two-dimensional interface. In that case the dual bulk geometry is constructed using a warped geometry with an AdS3 factor. We assign an effective central charge to the interface through the Brown-Henneaux formula for the AdS3 factor. We investigate two concrete examples, showing that the same effective central charge appears in the computation of entanglement entropy and governs the conformal anomaly.

  2. Creating Two-Dimensional Nets of Three-Dimensional Shapes Using "Geometer's Sketchpad"

    ERIC Educational Resources Information Center

    Maida, Paula

    2005-01-01

    This article is about a computer lab project in which prospective teachers used Geometer's Sketchpad software to create two-dimensional nets for three-dimensional shapes. Since this software package does not contain ready-made tools for creating non-regular or regular polygons, the students used prior knowledge and geometric facts to create their…

  3. Spatially variant regularization of lateral displacement measurement using variance.

    PubMed

    Sumi, Chikayoshi; Itoh, Toshiki

    2009-05-01

The purpose of this work is to confirm the effectiveness of our proposed spatially variant displacement-component-dependent regularization for our previously developed ultrasonic two-dimensional (2D) displacement vector measurement methods, i.e., the 2D cross-spectrum phase gradient method (CSPGM), 2D autocorrelation method (AM), and 2D Doppler method (DM). In general, the measurement accuracy of lateral displacement varies spatially and is lower than that of axial displacement, which is sufficiently accurate. This inaccurate measurement causes instability in 2D shear modulus reconstruction. Thus, spatially variant lateral displacement regularization using the lateral displacement variance should be more effective than conventional spatially uniform regularization in obtaining an accurate lateral strain measurement and a stable shear modulus reconstruction. The effectiveness is verified through agar phantom experiments. The agar phantom [60 mm (height) x 100 mm (lateral width) x 40 mm (elevational width)], which has, at a depth of 10 mm, a circular cylindrical inclusion (dia. = 10 mm) of a higher shear modulus (2.95 vs. 1.43 x 10^6 N/m^2, i.e., a relative shear modulus of 2.06), is compressed in the axial direction from the upper surface of the phantom using a commercial linear-array transducer with a nominal frequency of 7.5 MHz. Because the contrast-to-noise ratio (CNR) expresses the detectability of the inhomogeneous region in the lateral strain image and has almost the same sense as the signal-to-noise ratio (SNR) for strain measurement, the obtained results show that the proposed spatially variant lateral displacement regularization yields a more accurate lateral strain measurement as well as a higher detectability in the lateral strain image (e.g., for 2D CSPGM, CNR 2.36 vs. 2.27 and SNR 1.74 vs. 1.71, respectively).
Furthermore, the spatially variant lateral displacement regularization yields a more stable and more accurate 2D shear modulus

  4. [Iterated Tikhonov Regularization for Spectral Recovery from Tristimulus].

    PubMed

    Xie, De-hong; Li, Rui; Wan, Xiao-xia; Liu, Qiang; Zhu, Wen-feng

    2016-01-01

Reflective spectra in a multispectral image can objectively and originally represent color information owing to their high dimensionality, illuminant independence, and device independence. To address the loss of spectral information when spectral data are reconstructed from three-dimensional colorimetric data in a trichromatic camera-based spectral image acquisition system, and the consequent loss of color information, this work proposes an iterated Tikhonov regularization to reconstruct the reflectance spectra. First, according to the relationship between colorimetric values and reflective spectra in colorimetric theory, this work constructs a spectral reconstruction equation that can reconstruct high-dimensional spectral data from the three-dimensional colorimetric data acquired by a trichromatic camera. Then, iterated Tikhonov regularization, inspired by the idea of the Moore-Penrose pseudoinverse, is used to cope with the ill-posed linear inverse problem that arises in solving the reflectance reconstruction equation. The work also uses the L-curve method to obtain an optimal regularization parameter for the iterated Tikhonov regularization by training on a set of samples. Through these methods, the ill-conditioning of the spectral reconstruction equation can be effectively controlled and improved, and the loss of spectral information in the reconstructed spectral data can be reduced. A verification experiment is performed on a separate set of samples. The experimental results show that the proposed method reconstructs the reflective spectra with less loss of spectral information in the trichromatic camera-based spectral image acquisition system, reflected in clear decreases in spectral and colorimetric errors compared with the previous method.
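The iterated Tikhonov scheme itself is easy to state: repeatedly solve the regularized normal equations on the current residual. Below is a minimal sketch with a synthetic 3 x 31 "camera" matrix and a synthetic reflectance spectrum; the paper's actual sensor sensitivities and L-curve parameter selection are not reproduced.

```python
# Sketch of iterated Tikhonov regularization for recovering a reflectance
# spectrum r (31 bands) from tristimulus values t = M r (3 channels).
# M and r_true are synthetic stand-ins, not the paper's data.
import numpy as np

rng = np.random.default_rng(2)
M = rng.random((3, 31))                       # 3 channels x 31 spectral bands
wl = np.linspace(400, 700, 31)                # wavelengths in nm
r_true = np.exp(-0.5 * ((wl - 550) / 60) ** 2)  # smooth synthetic spectrum
t = M @ r_true                                # simulated tristimulus values

def iterated_tikhonov(M, t, lam, iters):
    K = M.T @ M + lam * np.eye(M.shape[1])    # regularized normal matrix
    r = np.zeros(M.shape[1])
    for _ in range(iters):
        r = r + np.linalg.solve(K, M.T @ (t - M @ r))  # refine on residual
    return r

r1 = iterated_tikhonov(M, t, lam=1e-2, iters=1)   # ordinary Tikhonov
r5 = iterated_tikhonov(M, t, lam=1e-2, iters=5)   # iterated refinement
err = lambda r: np.linalg.norm(M @ r - t)
print(err(r1), err(r5))   # the iteration reduces the data residual
```

With one iteration this is ordinary Tikhonov; further iterations progressively remove the bias that the penalty introduces, which is the sense in which the iterated variant improves on the plain one for this consistent (noise-free) toy problem.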

  5. Enumeration of Extended m-Regular Linear Stacks.

    PubMed

    Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2016-12-01

The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by 2. Since the two terminal points in a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended m-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is close to real protein contact maps. We show that the generating function of the extended m-regular linear stacks can be written as a rational function of the generating function of the m-regular linear stacks. For a certain m, by eliminating the latter, we obtain an equation satisfied by the former and derive the asymptotic formula for the number of extended m-regular linear stacks of a given length.

  6. Knowledge and regularity in planning

    NASA Technical Reports Server (NTRS)

    Allen, John A.; Langley, Pat; Matwin, Stan

    1992-01-01

    The field of planning has focused on several methods of using domain-specific knowledge. The three most common methods, use of search control, use of macro-operators, and analogy, are part of a continuum of techniques differing in the amount of reused plan information. This paper describes TALUS, a planner that exploits this continuum, and is used for comparing the relative utility of these methods. We present results showing how search control, macro-operators, and analogy are affected by domain regularity and the amount of stored knowledge.

  7. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  8. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  9. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...

  10. Regular Pentagons and the Fibonacci Sequence.

    ERIC Educational Resources Information Center

    French, Doug

    1989-01-01

    Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)

  11. Regularized degenerate multi-solitons

    NASA Astrophysics Data System (ADS)

    Correa, Francisco; Fring, Andreas

    2016-09-01

We report complex PT-symmetric multi-soliton solutions to the Korteweg-de Vries equation that asymptotically contain one-soliton solutions, each possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate energy solutions of the Schrödinger equation. Technically this is achieved by the application of Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively they may be constructed from a limiting process within the context of Hirota's direct method, or from a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.

  12. Natural frequency of regular basins

    NASA Astrophysics Data System (ADS)

    Tjandra, Sugih S.; Pudjaprasetya, S. R.

    2014-03-01

Similar to the vibration of a guitar string or an elastic membrane, water waves in an enclosed basin undergo standing oscillatory waves, also known as seiches. The resonant (eigen) periods of seiches are determined by the water depth and the geometry of the basin; for regular basins, explicit formulas are available. Resonance occurs when the dominant frequency of the external force matches an eigen frequency of the basin. In this paper, we implement a conservative finite volume scheme for the 2D shallow water equations to simulate resonance in closed basins. Further, we would like to use this scheme, together with the energy spectra of the recorded signal, to extract the resonant periods of arbitrary basins. Here we first test the procedure by computing the resonant periods of a square closed basin. The numerical resonant periods that we obtain are comparable with those from analytical formulas.
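
For a closed rectangular basin of uniform depth, the explicit formulas mentioned above reduce to the classical Merian formula, T_n = 2L/(n√(gh)). A minimal sketch (the function name and example values are illustrative, not from the paper):

```python
import math

def merian_periods(length_m, depth_m, n_modes=3, g=9.81):
    """Eigenperiods (s) of seiche modes in a closed rectangular basin of
    uniform depth, via the classical Merian formula T_n = 2L / (n*sqrt(g*h))."""
    c = math.sqrt(g * depth_m)  # shallow-water wave speed
    return [2.0 * length_m / (n * c) for n in range(1, n_modes + 1)]

# Illustrative example: a 1 km long basin of 10 m uniform depth
periods = merian_periods(1000.0, 10.0)
```

The fundamental mode (n = 1) has the longest period; numerically extracted periods for a square basin can be checked against this formula.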

  13. Nondissipative Velocity and Pressure Regularizations for the ICON Model

    NASA Astrophysics Data System (ADS)

    Restelli, M.; Giorgetta, M.; Hundertmark, T.; Korn, P.; Reich, S.

    2009-04-01

    formulation can be extended to the regularized systems retaining discrete conservation of mass and potential enstrophy. We also present some numerical results both in planar, doubly periodic geometry and in spherical geometry. These results show that our numerical formulation correctly approximates the behavior of the regularized models, and are a first step toward the use of the regularization idea within a complete, three-dimensional GCM. References [BR05] L. Bonaventura and T. Ringler. Analysis of discrete shallow-water models on geodesic Delaunay grids with C-type staggering. Mon. Wea. Rev., 133(8):2351-2373, August 2005. [HHPW08] M.W. Hecht, D.D. Holm, M.R. Petersen, and B.A. Wingate. Implementation of the LANS-α turbulence model in a primitive equation ocean model. J. Comp. Phys., 227(11):5691-5716, May 2008. [RWS07] S. Reich, N. Wood, and A. Staniforth. Semi-implicit methods, nonlinear balance, and regularized equations. Atmos. Sci. Lett., 8(1):1-6, 2007.

  14. Chemical Applications of Topology and Group Theory. 22. Lowest Degree Chirality Polynomials for Regular Polyhedra.

    DTIC Science & Technology

    1986-08-18

tetrahedron, octahedron, cube, icosahedron, and dodecahedron are 1, 15, 840, 3991680, and approximately 2.0274183 x 10^16, respectively. The regular dodecahedron is excluded from detailed consideration in this paper not only by its unmanageably large chiral dimensionality but by the intractably large size and complexity of the required character table for the symmetric group P20 of order 20! (approximately 2.432902 x 10^18). Study of the regular dodecahedron is

  15. Note on Prodi-Serrin-Ladyzhenskaya type regularity criteria for the Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Tran, Chuong V.; Yu, Xinwei

    2017-01-01

In this article, we prove new regularity criteria of the Prodi-Serrin-Ladyzhenskaya type for the Cauchy problem of the three-dimensional incompressible Navier-Stokes equations. Our results improve the classical Lr(0, T; Ls) regularity criteria for both velocity and pressure by factors of certain negative powers of the scaling invariant norms ‖u‖_{L^3} and ‖u‖_{Ḣ^{1/2}}.

  16. Manifestly scale-invariant regularization and quantum effective operators

    NASA Astrophysics Data System (ADS)

    Ghilencea, D. M.

    2016-05-01

Scale-invariant theories are often used to address the hierarchy problem. However, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function μ(σ) which, after spontaneous breaking of the scale symmetry, generates the usual dimensional regularization subtraction scale μ(⟨σ⟩). One consequence is that "evanescent" interactions generated by scale invariance of the action in d = 4 - 2ɛ (but vanishing in d = 4) give rise to new, finite quantum corrections. We find a (finite) correction ΔU(ϕ, σ) to the one-loop scalar potential for ϕ and σ, beyond the Coleman-Weinberg term. ΔU is due to an evanescent correction (∝ ɛ) to the field-dependent masses (of the states in the loop) which multiplies the pole (∝ 1/ɛ) of the momentum integral to give a finite quantum result. ΔU contains a nonpolynomial operator ∼ ϕ^6/σ^2 of known coefficient and is independent of the dimensionless subtraction parameter. A more general μ(ϕ, σ) is ruled out since, in their classical decoupling limit, the visible sector (of the Higgs ϕ) and hidden sector (dilaton σ) would still interact at the quantum level; thus, the subtraction function must depend on the dilaton only, μ ∼ σ. The method is useful in models where preserving scale symmetry at the quantum level is important.

  17. Remarks on the improved regularity criterion for the 2D Euler-Boussinesq equations with supercritical dissipation

    NASA Astrophysics Data System (ADS)

    Ye, Zhuan

    2016-12-01

    This paper is devoted to the investigation of the regularity criterion to the two-dimensional (2D) Euler-Boussinesq equations with supercritical dissipation. By making use of the Littlewood-Paley technique, we provide an improved regularity criterion involving the temperature at the scaling invariant level, which improves the previous results.

  18. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, which can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it is of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
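
One common way to realize a multiplicative functional is to minimize the product of the residual and a Tikhonov-type penalty, so that the effective regularization parameter at each iteration is simply the current ratio of the two terms. The sketch below is a hypothetical illustration of this general idea, not the authors' exact formulation:

```python
import numpy as np

def multiplicative_tikhonov(A, b, n_iter=30):
    """Hypothetical sketch of a multiplicative regularization iteration:
    minimizing ||A x - b||^2 * ||x||^2 leads, at each step, to a Tikhonov
    problem whose parameter alpha is the current residual-to-penalty ratio,
    so no regularization parameter is fixed beforehand."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # unregularized starting point
    I = np.eye(A.shape[1])
    for _ in range(n_iter):
        alpha = np.linalg.norm(A @ x - b) ** 2 / (np.linalg.norm(x) ** 2 + 1e-12)
        x = np.linalg.solve(A.T @ A + alpha * I, A.T @ b)
    return x
```

The regularization weight shrinks automatically as the residual decreases, which mirrors the self-adjusting behavior described in the abstract.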

  19. Monopole mass in the three-dimensional Georgi-Glashow model

    NASA Astrophysics Data System (ADS)

    Davis, A. C.; Hart, A.; Kibble, T. W.; Rajantie, A.

    2002-06-01

We study the three-dimensional Georgi-Glashow model to demonstrate how magnetic monopoles can be studied fully nonperturbatively in lattice Monte Carlo simulations, without any assumptions about the smoothness of the field configurations. We examine the apparent contradiction between the conjectured analytic connection of the “broken” and “symmetric” phases, and the interpretation of the mass (i.e., the free energy) of the fully quantized ’t Hooft-Polyakov monopole as an order parameter to distinguish the phases. We use Monte Carlo simulations to measure the monopole free energy and its first derivative with respect to the scalar mass. On small volumes we compare this to semiclassical predictions for the monopole. On large volumes we show that the free energy is screened to zero, signaling the formation of a confining monopole condensate. This screening does not allow the monopole mass to be interpreted as an order parameter, resolving the paradox.

  20. Testing times: regularities in the historical sciences.

    PubMed

    Jeffares, Ben

    2008-12-01

    The historical sciences, such as geology, evolutionary biology, and archaeology, appear to have no means to test hypotheses. However, on closer examination, reasoning in the historical sciences relies upon regularities, regularities that can be tested. I outline the role of regularities in the historical sciences, and in the process, blur the distinction between the historical sciences and the experimental sciences: all sciences deploy theories about the world in their investigations.

  1. Regularity effect in prospective memory during aging

    PubMed Central

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

Background The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performances was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical implications of regularity

  2. Computationally efficient error estimate for evaluation of regularization in photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Bhatt, Manish; Acharya, Atithi; Yalavarthy, Phaneendra K.

    2016-10-01

    The model-based image reconstruction techniques for photoacoustic (PA) tomography require an explicit regularization. An error estimate (η2) minimization-based approach was proposed and developed for the determination of a regularization parameter for PA imaging. The regularization was used within Lanczos bidiagonalization framework, which provides the advantage of dimensionality reduction for a large system of equations. It was shown that the proposed method is computationally faster than the state-of-the-art techniques and provides similar performance in terms of quantitative accuracy in reconstructed images. It was also shown that the error estimate (η2) can also be utilized in determining a suitable regularization parameter for other popular techniques such as Tikhonov, exponential, and nonsmooth (ℓ1 and total variation norm based) regularization methods.

  3. Digital image correlation involves an inverse problem: A regularization scheme based on subset size constraint

    NASA Astrophysics Data System (ADS)

    Zhan, Qin; Yuan, Yuan; Fan, Xiangtao; Huang, Jianyong; Xiong, Chunyang; Yuan, Fan

    2016-06-01

Digital image correlation (DIC) essentially involves a class of inverse problems. Here, a regularization scheme is developed for the subset-based DIC technique to effectively inhibit potential ill-posedness that likely arises in actual deformation calculations and hence enhance the numerical stability, accuracy and precision of correlation measurement. With the aid of a parameterized two-dimensional Butterworth window, a regularized subpixel registration strategy is established, in which the amount of speckle information introduced to correlation calculations may be weighted through an equivalent subset size constraint. The optimal regularization parameter associated with each individual sampling point is determined in a self-adaptive way by numerically investigating the curve of the 2-norm condition number of the coefficient matrix versus the corresponding equivalent subset size, based on which the regularized solution can eventually be obtained. Numerical results deriving from both synthetic speckle images and actual experimental images demonstrate the feasibility and effectiveness of the set of newly-proposed regularized DIC algorithms.
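
As an illustration of the windowing idea, a radially symmetric 2D Butterworth window can weight speckle information toward the subset center; the function below is a hypothetical sketch (the paper's exact parameterization may differ):

```python
import numpy as np

def butterworth_window_2d(size, cutoff, order=2):
    """Radially symmetric 2D Butterworth window
    w(r) = 1 / (1 + (r / cutoff)^(2*order)), centered on a square subset.
    Larger `cutoff` admits more of the subset; larger `order` sharpens
    the roll-off, acting like an equivalent subset size constraint."""
    half = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    r = np.hypot(x - half, y - half)  # radial distance from subset center
    return 1.0 / (1.0 + (r / cutoff) ** (2 * order))
```

The window equals 1 at the subset center and decays smoothly toward the edges, so varying `cutoff` continuously tunes the amount of speckle information entering the correlation.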

  4. Elementary Particle Spectroscopy in Regular Solid Rewrite

    NASA Astrophysics Data System (ADS)

    Trell, Erik

    2008-10-01

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)×O(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each

  5. Elementary Particle Spectroscopy in Regular Solid Rewrite

    SciTech Connect

    Trell, Erik

    2008-10-17

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it ''is the likely keystone of a fundamental computational foundation'' also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). 
The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each

  6. Higher spin black holes in three dimensions: Remarks on asymptotics and regularity

    NASA Astrophysics Data System (ADS)

    Bañados, Máximo; Canto, Rodrigo; Theisen, Stefan

    2016-07-01

    In the context of (2 +1 )-dimensional S L (N ,R )×S L (N ,R ) Chern-Simons theory we explore issues related to regularity and asymptotics on the solid torus, for stationary and circularly symmetric solutions. We display and solve all necessary conditions to ensure a regular metric and metriclike higher spin fields. We prove that holonomy conditions are necessary but not sufficient conditions to ensure regularity, and that Hawking conditions do not necessarily follow from them. Finally we give a general proof that once the chemical potentials are turned on—as demanded by regularity—the asymptotics cannot be that of Brown-Henneaux.

  7. The experimental localization of Aubry-Mather sets using regularization techniques inspired by viscosity theory.

    PubMed

    Guzzo, Massimiliano; Bernardi, Olga; Cardin, Franco

    2007-09-01

    We provide a new method for the localization of Aubry-Mather sets in quasi-integrable two-dimensional twist maps. Inspired by viscosity theories, we introduce regularization techniques based on the new concept of "relative viscosity and friction," which allows one to obtain regularized parametrizations of invariant sets with irrational rotation number. Such regularized parametrizations allow one to compute a curve in the phase-space that passes near the Aubry-Mather set, and an invariant measure whose density allows one to locate the gaps on the curve. We show applications to the "golden" cantorus of the standard map as well as to a more general case.

  8. Adaptive L₁/₂ shooting regularization method for survival analysis using gene expression data.

    PubMed

    Liu, Xiao-Ying; Liang, Yong; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2013-01-01

A new adaptive L₁/₂ shooting regularization method for variable selection, based on the Cox proportional hazards model, is proposed. This adaptive L₁/₂ shooting algorithm can be easily obtained by the optimization of a reweighted iterative series of L₁ penalties and a shooting strategy for the L₁/₂ penalty. Simulation results based on high-dimensional artificial data show that the adaptive L₁/₂ shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L₁/₂ regularization method performs competitively.
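
For orientation, the underlying "shooting" idea is cyclic coordinate descent with a thresholding step. The sketch below implements the plain L₁ case with soft thresholding (for least squares), not the adaptive L₁/₂ variant or the Cox partial likelihood used in the paper:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the coordinate-wise L1 proximal step."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_shooting(X, y, lam, n_iter=100):
    """Plain L1 'shooting' (cyclic coordinate descent) for
    min_b 0.5*||y - X b||^2 + lam*||b||_1, assuming nonzero columns."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-coordinate curvature ||x_j||^2
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]  # partial residual excluding j
            b[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return b
```

The adaptive variants replace the fixed threshold `lam` with coordinate-wise reweighted thresholds derived from the current iterate.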

  9. Harmonic R matrices for scattering amplitudes and spectral regularization.

    PubMed

    Ferro, Livia; Łukowski, Tomasz; Meneghelli, Carlo; Plefka, Jan; Staudacher, Matthias

    2013-03-22

    Planar N = 4 supersymmetric Yang-Mills theory appears to be integrable. While this allows one to find this theory's exact spectrum, integrability has hitherto been of no direct use for scattering amplitudes. To remedy this, we deform all scattering amplitudes by a spectral parameter. The deformed tree-level four-point function turns out to be essentially the one-loop R matrix of the integrable N = 4 spin chain satisfying the Yang-Baxter equation. Deformed on-shell three-point functions yield novel three-leg R matrices satisfying bootstrap equations. Finally, we supply initial evidence that the spectral parameter might find its use as a novel symmetry-respecting regulator replacing dimensional regularization. Its physical meaning is a local deformation of particle helicity, a fact which might be useful for a much larger class of nonintegrable four-dimensional field theories.

  10. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-04-01

Possible nonperturbative continuum regularization schemes for quantum field theory are discussed which are based upon the Langevin equation of Parisi and Wu. Breit, Gupta and Zaks made the first proposal for a new gauge invariant nonperturbative regularization. The scheme is based on smearing in the ''fifth-time'' of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, two difficulties arise which, in general, ruin the scheme. One problem is that the superficial quadratic divergences force a bottomless action for the noise. Another difficulty is that stochastic regularization by fifth-time smearing is incompatible with Zwanziger's gauge fixing, which is the only known nonperturbative covariant gauge fixing for nonabelian gauge theories. Finally, a successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest. Hopefully, the scheme will lend itself to nonperturbative analysis. 44 refs., 16 figs.

  11. Numerical Regularization of Ill-Posed Problems.

    DTIC Science & Technology

    1980-07-09

Unione Matematica Italiana. 4. The parameter choice problem in linear regularization: a mathematical introduction, in "Ill-Posed Problems: Theory and...vector b which is generally unavailable (see [21], [22]). Köckler [33] has shown, however, that in the case of Tikhonov regularization for matrices it may

  12. Regular Decompositions for H(div) Spaces

    SciTech Connect

    Kolev, Tzanio; Vassilevski, Panayot

    2012-01-01

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  13. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...

  14. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...

  15. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...

  16. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...

  17. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Regular membership. 725.3 Section 725.3 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person...

  18. Regularization techniques in realistic Laplacian computation.

    PubMed

    Bortel, Radoslav; Sovka, Pavel

    2007-11-01

This paper explores regularization options for the ill-posed spline coefficient equations in the realistic Laplacian computation. We investigate the use of the Tikhonov regularization, truncated singular value decomposition, and the so-called lambda-correction, with the regularization parameter chosen by the L-curve, generalized cross-validation, quasi-optimality, and discrepancy principle criteria. The provided range of regularization techniques is much wider than in previous works. The improvement of the realistic Laplacian is investigated by simulations on the three-shell spherical head model. The conclusion is that the best performance is provided by the combination of the Tikhonov regularization and the generalized cross-validation criterion, a combination that has never been suggested for this task before.
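
To illustrate the combination singled out here, Tikhonov regularization with the GCV criterion can be sketched via the SVD; the function name and the candidate-grid approach below are illustrative, not the paper's implementation:

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Tikhonov solution x_l = argmin ||A x - b||^2 + l*||x||^2, with the
    parameter chosen among `lambdas` by generalized cross-validation (GCV),
    GCV(l) = ||A x_l - b||^2 / trace(I - A A_l^+)^2, computed via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best = None
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam)  # Tikhonov filter factors
        # residual norm^2: filtered part plus the component of b outside range(A)
        resid = np.sum(((1 - f) * beta) ** 2) + max(b @ b - beta @ beta, 0.0)
        denom = (len(b) - np.sum(f)) ** 2  # effective degrees of freedom
        gcv = resid / denom
        if best is None or gcv < best[0]:
            x = Vt.T @ ((f / s) * beta)
            best = (gcv, lam, x)
    return best[1], best[2]
```

In practice the grid of candidate parameters is swept on a log scale and the GCV minimizer selected automatically, with no noise-level estimate required.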

  19. A linear functional strategy for regularized ranking.

    PubMed

    Kriukova, Galyna; Panasiuk, Oleksandra; Pereverzyev, Sergei V; Tkachenko, Pavlo

    2016-01-01

Regularization schemes are frequently used for performing ranking tasks. This topic has been intensively studied in recent years. However, to be effective a regularization scheme should be equipped with a suitable strategy for choosing a regularization parameter. In the present study we discuss an approach based on the idea of a linear combination of regularized rankers corresponding to different values of the regularization parameter. The coefficients of the linear combination are estimated by means of the so-called linear functional strategy. We provide a theoretical justification of the proposed approach and illustrate it with numerical experiments, some of which concern ranking the risk of nocturnal hypoglycemia in diabetes patients.

  20. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    SciTech Connect

    Gao, J. M.; Liu, Y.; Li, W.; Lu, J.; Dong, Y. B.; Xia, Z. W.; Yi, P.; Yang, Q. W.

    2013-09-15

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of plasma radiation toroidal symmetry, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness. And the local plasma emission is obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize a regularization parameter on the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.

  1. Manifold regularized non-negative matrix factorization with label information

    NASA Astrophysics Data System (ADS)

    Li, Huirong; Zhang, Jiangshe; Wang, Changpeng; Liu, Junmin

    2016-03-01

Non-negative matrix factorization (NMF), as a popular technique for finding parts-based, linear representations of non-negative data, has been successfully applied in a wide range of applications, such as feature learning, dictionary learning, and dimensionality reduction. However, the local manifold regularization of the data and the discriminative information of the available labels have not previously been taken into account together in NMF. We propose a new semisupervised matrix decomposition method, called manifold regularized non-negative matrix factorization (MRNMF) with label information, which incorporates manifold regularization and label information into NMF to improve the performance of NMF in clustering tasks. We encode the local geometrical structure of the data space by constructing a nearest neighbor graph and enhance the discriminative ability of different classes by effectively using the label information. Experimental comparisons with the state-of-the-art methods on the COIL20, PIE, Extended Yale B, and MNIST databases demonstrate the effectiveness of MRNMF.
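
A minimal sketch of graph-regularized multiplicative updates in the style of GNMF is given below; the exact MRNMF update rules, which additionally exploit label information, are in the paper:

```python
import numpy as np

def gnmf(X, W, k, lam=0.1, n_iter=200, eps=1e-9):
    """Graph-regularized NMF sketch (multiplicative updates in the style of
    Cai et al.'s GNMF): min ||X - U V^T||_F^2 + lam * tr(V^T L V), where
    L = D - W is the graph Laplacian of the nearest-neighbor affinity W."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))  # degree matrix of the affinity graph
    for _ in range(n_iter):
        # multiplicative updates preserve non-negativity of U and V
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V
```

The `lam * tr(V^T L V)` term pulls the low-dimensional representations of graph-adjacent samples together, which is the manifold regularization referred to in the abstract.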

  2. Interior Regularity Estimates in High Conductivity Homogenization and Application

    NASA Astrophysics Data System (ADS)

    Briane, Marc; Capdeboscq, Yves; Nguyen, Luc

    2013-01-01

    In this paper, uniform pointwise regularity estimates for the solutions of conductivity equations are obtained in a unit conductivity medium reinforced by an ɛ-periodic lattice of highly conducting thin rods. The estimates are derived only at a distance ɛ^(1+τ) (for some τ > 0) away from the fibres. This distance constraint is rather sharp, since the gradients of the solutions are shown to be unbounded locally in L^p as soon as p > 2. One key ingredient is the derivation in dimension two of regularity estimates for the solutions of the equations deduced from a Fourier series expansion with respect to the fibres' direction, and weighted by the high-contrast conductivity. The dependence on powers of ɛ of these two-dimensional estimates is shown to be sharp. The initial motivation for this work comes from imaging, and enhanced resolution phenomena observed experimentally in the presence of micro-structures (Lerosey et al., Science 315:1120-1124, 2007). We use these regularity estimates to characterize the signature of low volume fraction heterogeneities in the fibre-reinforced medium, assuming that the heterogeneities stay at a distance ɛ^(1+τ) away from the fibres.

  3. Regular black holes and noncommutative geometry inspired fuzzy sources

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinpei

    2016-05-01

    We investigated regular black holes with fuzzy sources in three and four dimensions. The density distributions of such fuzzy sources are inspired by noncommutative geometry and given by Gaussian or generalized Gaussian functions. We utilized mass functions to give a physical interpretation of the horizon formation condition for the black holes. In particular, we investigated three-dimensional BTZ-like black holes and four-dimensional Schwarzschild-like black holes in detail, and found that the number of horizons is related to the space-time dimensions, and the existence of a void in the vicinity of the center of the space-time is significant, rather than noncommutativity. As an application, we considered a three-dimensional black hole with the fuzzy disc which is a disc-shaped region known in the context of noncommutative geometry as a source. We also analyzed a four-dimensional black hole with a source whose density distribution is an extension of the fuzzy disc, and investigated the horizon formation condition for it.

  4. Anomalies, Hawking radiations, and regularity in rotating black holes

    SciTech Connect

    Iso, Satoshi; Umetsu, Hiroshi; Wilczek, Frank

    2006-08-15

    This is an extended version of our previous letter [S. Iso, H. Umetsu, and F. Wilczek, Phys. Rev. Lett. 96, 151302 (2006).]. In this paper we consider rotating black holes and show that the flux of Hawking radiation can be determined by anomaly cancellation conditions and the regularity requirement at the horizon. By using a dimensional reduction technique, each partial wave of quantum fields in a d=4 rotating black hole background can be interpreted as a (1+1)-dimensional charged field with a charge proportional to the azimuthal angular momentum m. From this and the analysis [S. P. Robinson and F. Wilczek, Phys. Rev. Lett. 95, 011303 (2005), S. Iso, H. Umetsu, and F. Wilczek, Phys. Rev. Lett. 96, 151302 (2006).] on Hawking radiation from charged black holes, we show that the total flux of Hawking radiation from rotating black holes can be universally determined in terms of the values of anomalies at the horizon by demanding gauge invariance and general coordinate covariance at the quantum level. We also clarify our choice of boundary conditions and show that our results are consistent with the effective action approach, where regularity at the future horizon and vanishing of ingoing modes at r = ∞ are imposed (i.e., the Unruh vacuum).

  5. Regularization of chaos by noise in electrically driven nanowire systems

    NASA Astrophysics Data System (ADS)

    Hessari, Peyman; Do, Younghae; Lai, Ying-Cheng; Chae, Junseok; Park, Cheol Woo; Lee, GyuWon

    2014-04-01

    The electrically driven nanowire systems are of great importance to nanoscience and engineering. Due to strong nonlinearity, chaos can arise, but in many applications it is desirable to suppress chaos. The intrinsically high-dimensional nature of the system prevents application of the conventional method of controlling chaos. Remarkably, we find that the phenomenon of coherence resonance, which has been well documented but for low-dimensional chaotic systems, can occur in the nanowire system that mathematically is described by two coupled nonlinear partial differential equations, subject to periodic driving and noise. Especially, we find that, when the nanowire is in either the weakly chaotic or the extensively chaotic regime, an optimal level of noise can significantly enhance the regularity of the oscillations. This result is robust because it holds regardless of whether noise is white or colored, and of whether the stochastic drivings in the two independent directions transverse to the nanowire are correlated or independent of each other. Noise can thus regularize chaotic oscillations through the mechanism of coherence resonance in the nanowire system. More generally, we posit that noise can provide a practical way to harness chaos in nanoscale systems.

  6. Quantitative regularities in floodplain formation

    NASA Astrophysics Data System (ADS)

    Nevidimova, O.

    2009-04-01

    Modern methods of the theory of complex systems allow one to build mathematical models of complex systems in which self-organizing processes are largely determined by nonlinear effects and feedback. However, some factors that exert significant influence on the dynamics of geomorphosystems can hardly be expressed adequately in the language of mathematical models. Conceptual modeling allows us to overcome this difficulty. It is based on the methods of synergetics, which, together with the theory of dynamical systems and classical geomorphology, make it possible to describe the dynamics of geomorphological systems. The concept most adequate for mathematical modeling of complex systems is that of model dynamics based on equilibrium. This concept rests on dynamic equilibrium, the tendency towards which is observed in the evolution of all geomorphosystems. As an objective law, it is revealed in the evolution of fluvial relief in general, and in river channel processes in particular, demonstrating the ability of these systems to self-organize. The channel process is expressed in the formation of river reaches, riffles, meanders and floodplain. As the floodplain is a surface flooded periodically during high water, it naturally connects the river channel with the slopes, being one of the boundary expressions of the water stream's activity. Floodplain dynamics is inseparable from channel dynamics. The floodplain is formed by simultaneous horizontal and vertical displacement of the river channel, that is, its height is Y = Y(x, y), where x and y are the horizontal and vertical coordinates. When dy/dt = 0 (i.e., the channel is not incising), the river, migrating in the horizontal plane, leaves behind a low surface whose total duration of flooding during high water decreases from a maximum at the initial moment t0 to zero at the moment tn. The total amount of material accumulated on the floodplain surface changes in a similar manner.

  7. Functional MRI using regularized parallel imaging acquisition.

    PubMed

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M; Belliveau, John W; Wald, Lawrence L; Kwong, Kenneth K

    2005-08-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter. Since it is necessary to utilize a static prior for regularization, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions.
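
    The regularized SENSE unfolding step can be sketched per group of aliased voxels as a small Tikhonov problem biased toward the static prior. The coil sensitivities S, prior x0, and λ below are illustrative stand-ins, not the paper's actual pipeline.

```python
import numpy as np

def sense_unfold_regularized(y, S, x0, lam):
    """Unfold one group of R aliased voxels from n_coil folded measurements:
        min_x ||S x - y||^2 + lam ||x - x0||^2,
    where x0 is the static prior image at those voxels. Closed form:
        x = (S^H S + lam I)^{-1} (S^H y + lam x0)."""
    R = S.shape[1]
    return np.linalg.solve(S.conj().T @ S + lam * np.eye(R),
                           S.conj().T @ y + lam * x0)

# Toy case: 4 coils, acceleration factor R = 2, noiseless data
rng = np.random.default_rng(1)
S = rng.standard_normal((4, 2))          # coil sensitivities at the 2 voxels
x_true = np.array([2.0, 3.0])            # true voxel intensities
y = S @ x_true                           # folded (aliased) coil signals
x_hat = sense_unfold_regularized(y, S, x0=np.zeros(2), lam=1e-6)
```

    Larger λ pulls the reconstruction toward the prior, trading dynamic CNR for SNR, which is the trade-off the paper quantifies.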

  9. Completeness and regularity of generalized fuzzy graphs.

    PubMed

    Samanta, Sovan; Sarkar, Biswajit; Shin, Dongmin; Pal, Madhumangal

    2016-01-01

    Fuzzy graphs are the backbone of many real systems like networks, images, scheduling, etc. But, due to some restrictions on edges, fuzzy graphs are limited in their ability to represent some systems. Generalized fuzzy graphs are appropriate to avoid such restrictions. In this study, generalized fuzzy graphs are introduced and their matrix representation is described. Completeness and regularity are two important parameters of graph theory. Here, regular and complete generalized fuzzy graphs are introduced and some of their properties are discussed. Finally, effective regular graphs are exemplified.

  10. Lagrangian averaging, nonlinear waves, and shock regularization

    NASA Astrophysics Data System (ADS)

    Bhat, Harish S.

    In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-α) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories η^ε centered around a given Lagrangian flow η. With this tube framework, the LAE-α equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations ξ, truncate, average, and then model those terms that are nonlinear functions of ξ. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments that show that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here α is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-α limit, without any standard viscosity

  11. Analysis of regularized inversion of data corrupted by white Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-04-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δ ε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ‖Au − m‖²_{L²} + α‖u‖²_{H^r} } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and the first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed.
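
    A discrete sketch of such a Tikhonov estimator, with the H^r penalty approximated by an identity operator for r = 0 and by a forward-difference matrix (a stand-in for the derivative penalty) for r = 1; the forward map, grid size, and α are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, m, alpha, r=0):
    """Discrete sketch of T_alpha(m) = argmin_u ||A u - m||^2 + alpha ||L u||^2.
    L = I for r = 0 (standard Tikhonov); L = D, a forward-difference
    matrix, for r = 1 (first-derivative penalty)."""
    n = A.shape[1]
    L = np.eye(n) if r == 0 else np.diff(np.eye(n), axis=0)
    # Normal equations: (A^T A + alpha L^T L) u = A^T m
    return np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ m)

# Denoising example: identity forward map, noisy samples of a smooth function
n = 50
x = np.linspace(0.0, 1.0, n)
u_true = np.sin(2 * np.pi * x)
rng = np.random.default_rng(2)
m_noisy = u_true + 0.2 * rng.standard_normal(n)
u_reg = tikhonov(np.eye(n), m_noisy, alpha=1.0, r=1)
```

    The derivative penalty damps the rough (high-frequency) noise components much more strongly than the smooth signal, which is the discrete analogue of penalizing the H¹ norm.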

  12. Partial regularity of viscosity solutions for a class of Kolmogorov equations arising from mathematical finance

    NASA Astrophysics Data System (ADS)

    Rosestolato, M.; Święch, A.

    2017-02-01

    We study value functions which are viscosity solutions of certain Kolmogorov equations. Using PDE techniques we prove that they are C 1 + α regular on special finite dimensional subspaces. The problem has origins in hedging derivatives of risky assets in mathematical finance.

  13. [Serum ferritin in donors with regular plateletpheresis].

    PubMed

    Ma, Chun-Hui; Guo, Ru-Hua; Wu, Wei-Jian; Yan, Jun-Xiong; Yu, Jin-Lin; Zhu, Ye-Hua; He, Qi-Tong; Luo, Yi-Hong; Huang, Lu; Ye, Rui-Yun

    2011-04-01

    This study was aimed to evaluate the impact of regular platelet donation on the serum ferritin (SF) of donors. A total of 93 male blood donors, including 24 initial plateletpheresis donors and 69 regular plateletpheresis donors, were selected randomly. Their SF level was measured by ELISA. The results showed that the SF levels of initial plateletpheresis donors and regular plateletpheresis donors were 91.08 ± 23.38 µg/L and 57.16 ± 35.48 µg/L respectively, and all were in normal levels, but there was a significant difference between the 2 groups (p < 0.05). The SF level decreased as the donation frequency increased, but there were no significant differences between the groups with different donation frequencies. No correlation with lifetime platelet donations was found. It is concluded that regular plateletpheresis donors may have a lower SF level.

  14. Epigenetic adaptation to regular exercise in humans.

    PubMed

    Ling, Charlotte; Rönn, Tina

    2014-07-01

    Regular exercise has numerous health benefits, for example, it reduces the risk of cardiovascular disease and cancer. It has also been shown that the risk of type 2 diabetes can be halved in high-risk groups through nonpharmacological lifestyle interventions involving exercise and diet. Nevertheless, the number of people living a sedentary life is dramatically increasing worldwide. Researchers have searched for molecular mechanisms explaining the health benefits of regular exercise for decades and it is well established that exercise alters the gene expression pattern in multiple tissues. However, until recently it was unknown that regular exercise can modify the genome-wide DNA methylation pattern in humans. This review will focus on recent progress in the field of regular exercise and epigenetics.

  15. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)

  16. Probabilistic regularization in inverse optical imaging.

    PubMed

    De Micheli, E; Viano, G A

    2000-11-01

    The problem of object restoration in the case of spatially incoherent illumination is considered. A regularized solution to the inverse problem is obtained through a probabilistic approach, and a numerical algorithm based on the statistical analysis of the noisy data is presented. Particular emphasis is placed on the question of the positivity constraint, which is incorporated into the probabilistically regularized solution by means of a quadratic programming technique. Numerical examples illustrating the main steps of the algorithm are also given.

  17. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
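
    The core step, Tikhonov-style regularization of an ill-posed least-squares problem minimized by conjugate gradients, can be sketched as follows. The spline parametrization and reservoir simulator are omitted; the Hilbert-like matrix G is a hypothetical stand-in for the model sensitivities.

```python
import numpy as np

def cg_regularized(G, d, alpha, iters=100, tol=1e-12):
    """Conjugate-gradient solution of the regularized normal equations
        (G^T G + alpha I) p = G^T d,
    i.e. the minimizer of ||G p - d||^2 + alpha ||p||^2. The alpha term
    makes the ill-conditioned system well-posed."""
    matvec = lambda v: G.T @ (G @ v) + alpha * v
    p_est = np.zeros(G.shape[1])
    r = G.T @ d - matvec(p_est)
    q = r.copy()
    rs = r @ r
    for _ in range(iters):
        Aq = matvec(q)
        step = rs / (q @ Aq)
        p_est = p_est + step * q
        r = r - step * Aq
        rs_new = r @ r
        if rs_new < tol ** 2:
            break
        q = r + (rs_new / rs) * q
        rs = rs_new
    return p_est

# Ill-conditioned toy "sensitivity" matrix (Hilbert-like), exact data
n = 6
G = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
d = G @ np.ones(n)
p_reg = cg_regularized(G, d, alpha=1e-3)
```

    Without the α term, conjugate gradients on this system would amplify data errors along the near-null directions of G; the regularization bounds that amplification at the cost of a small bias.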

  18. Usual Source of Care in Preventive Service Use: A Regular Doctor versus a Regular Site

    PubMed Central

    Xu, K Tom

    2002-01-01

    Objective To compare the effects of having a regular doctor and having a regular site on five preventive services, controlling for the endogeneity of having a usual source of care. Data Source The Medical Expenditure Panel Survey 1996 conducted by the Agency for Healthcare Research and Quality and the National Center for Health Statistics. Study Design Mammograms, pap smears, blood pressure checkups, cholesterol level checkups, and flu shots were examined. A modified behavioral model framework was presented, which controlled for the endogeneity of having a usual source of care. Based on this framework, a two-equation empirical model was established to predict the probabilities of having a regular doctor and having a regular site, and use of each type of preventive service. Principal Findings Having a regular doctor was found to have a greater impact than having a regular site on discretional preventive services, such as blood pressure and cholesterol level checkups. No statistically significant differences were found between the effects of having a regular doctor and having a regular site on the use of flu shots, pap smears, and mammograms. Among the five preventive services, having a usual source of care had the greatest impact on cholesterol level checkups and pap smears. Conclusions Promoting a stable physician–patient relationship can improve patients’ timely receipt of clinical prevention. For certain preventive services, having a regular doctor is more effective than having a regular site. PMID:12546284

  19. Spectral analysis of two-dimensional Bose-Hubbard models

    NASA Astrophysics Data System (ADS)

    Fischer, David; Hoffmann, Darius; Wimberger, Sandro

    2016-04-01

    One-dimensional Bose-Hubbard models are well known to obey a transition from regular to quantum-chaotic spectral statistics. We extend this concept to relatively simple two-dimensional many-body models. In two dimensions, too, a transition from regular to chaotic spectral statistics is found and discussed. In particular, we analyze the dependence of the spectral properties on the bond number of the two-dimensional lattices and the applied boundary conditions. For maximal connectivity, the systems behave most regularly, in agreement with the applicability of mean-field approaches in the limit of many nearest-neighbor couplings at each site.
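
    The regular-to-chaotic transition in spectral statistics is commonly quantified with the consecutive-gap ratio, which needs no spectral unfolding. The sketch below compares the Poisson (regular) and GOE (chaotic) reference ensembles rather than the Bose-Hubbard model itself; sample sizes are arbitrary.

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean consecutive-gap ratio <r> of a sorted spectrum:
        r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}),  s_n = E_{n+1} - E_n.
    Poisson (regular) statistics give <r> = 2 ln 2 - 1 ~ 0.386,
    GOE (chaotic) statistics give <r> ~ 0.53."""
    E = np.sort(np.asarray(levels))
    s = np.diff(E)
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(3)

# Regular reference: uncorrelated (Poissonian) level positions
poisson_levels = np.cumsum(rng.exponential(size=20000))
r_poisson = mean_gap_ratio(poisson_levels)

# Chaotic reference: eigenvalues of a random real symmetric (GOE) matrix
M = rng.standard_normal((800, 800))
goe_levels = np.linalg.eigvalsh((M + M.T) / 2.0)
r_goe = mean_gap_ratio(goe_levels)
```

    Applied to the many-body spectrum at fixed symmetry, ⟨r⟩ interpolates between these two reference values as the system crosses from regular to chaotic.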

  20. On minimal energy dipole moment distributions in regular polygonal agglomerates

    NASA Astrophysics Data System (ADS)

    Rosa, Adriano Possebon; Cunha, Francisco Ricardo; Ceniceros, Hector Daniel

    2017-01-01

    Static, regular polygonal and close-packed clusters of spherical magnetic particles and their energy-minimizing magnetic moments are investigated in a two-dimensional setting. This study focuses on a simple particle system which is solely described by the dipole-dipole interaction energy, both without and in the presence of an in-plane magnetic field. For a regular polygonal structure of n sides with n ≥ 3, and in the absence of an external field, it is proved rigorously that the magnetic moments given by the roots of unity, i.e. tangential to the polygon, are a minimizer of the dipole-dipole interaction energy. Also, for zero external field, new multiple local minima are discovered for the regular polygonal agglomerates. The number of local extrema found is proportional to [n/2], and these critical points are characterized by the presence of a pair of magnetic moments with a large deviation from the tangential configuration and whose particles are at least three diameters apart. The changes induced by an in-plane external magnetic field on the minimal energy, tangential configurations are investigated numerically. The two critical fields, which correspond to a crossover with the linear chain minimal energy and with the break-up of the agglomerate, respectively, are examined in detail. In particular, the numerical results are compared directly with the asymptotic formulas of Danilov et al. (2012) [23] and a remarkable agreement is found even for moderate to large fields. Finally, three examples of close-packed structures are investigated: a triangle, a centered hexagon, and a 19-particle close-packed cluster. The numerical study reveals novel, illuminating characteristics of these compact clusters often seen in ferrofluids. The centered hexagon is energetically favorable to the regular hexagon and the minimal energy for the larger 19-particle cluster is even lower than that of the close packed hexagon. In addition, this larger close packed agglomerate has two
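
    The tangential ("roots of unity") minimizer can be checked numerically for a small polygon. Reduced units and the pentagon-on-a-unit-circle geometry below are illustrative assumptions, not the paper's setup of touching spheres.

```python
import numpy as np

def dipole_energy(pos, mom):
    """Total dipole-dipole interaction energy (reduced units):
        U = sum_{i<j} [ m_i.m_j - 3 (m_i.r_hat)(m_j.r_hat) ] / r^3."""
    U = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            rh = d / r
            U += (mom[i] @ mom[j] - 3.0 * (mom[i] @ rh) * (mom[j] @ rh)) / r**3
    return U

# Regular pentagon of unit dipoles on the unit circle
n = 5
th = 2.0 * np.pi * np.arange(n) / n
pos = np.c_[np.cos(th), np.sin(th)]
tangential = np.c_[-np.sin(th), np.cos(th)]   # "roots of unity" configuration
radial = np.c_[np.cos(th), np.sin(th)]        # moments pointing outward
aligned = np.tile([1.0, 0.0], (n, 1))         # all moments parallel
U_tan = dipole_energy(pos, tangential)
U_rad = dipole_energy(pos, radial)
U_ali = dipole_energy(pos, aligned)
```

    The tangential (vortex) configuration gives the lowest energy of the three, consistent with the theorem quoted in the abstract.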

  1. Shadow of rotating regular black holes

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon; Amir, Muhammed; Ahmedov, Bobomurat; Ghosh, Sushant G.

    2016-05-01

    We study the shadows cast by different types of rotating regular black holes, viz. Ayón-Beato-García (ABG), Hayward, and Bardeen. In addition to the total mass (M) and rotation parameter (a), these black holes have further parameters: electric charge (Q), deviation parameter (g), and magnetic charge (g*). Interestingly, the size of the shadow is affected by these parameters in addition to the rotation parameter. We found that the radius of the shadow in each case decreases monotonically, and the distortion parameter increases, when the values of these parameters increase. A comparison with the standard Kerr case is also investigated. We have also studied the influence of a plasma environment around regular black holes on their shadows. The presence of the plasma increases the apparent size of the regular black hole's shadow due to two effects: (i) the gravitational redshift of the photons and (ii) the radial dependence of the plasma density.

  2. Nonlinear electrodynamics and regular black holes

    NASA Astrophysics Data System (ADS)

    Sajadi, S. N.; Riazi, N.

    2017-03-01

    In this work, an exact regular black hole solution in General Relativity is presented. The source is a nonlinear electromagnetic field with the algebraic structure T^0_0 = T^1_1 for the energy-momentum tensor, partially satisfying the weak energy condition but not the strong energy condition. In the weak field limit, the EM field behaves like the Maxwell field. The solution corresponds to a charged black hole with q ≤ 0.77 m. The metric, the curvature invariants, and the electric field are regular everywhere. The black hole is stable against small perturbations of spacetime, and its geometrothermodynamical stability has been investigated using the Weinhold metric. Finally, we investigate the idea that the observable universe lives inside a regular black hole. We argue that this picture might provide a viable description of the universe.

  3. Regularization and the potential of effective field theory in nucleon-nucleon scattering

    SciTech Connect

    Phillips, D.R.

    1998-04-01

    This paper examines the role that regularization plays in the definition of the potential used in effective field theory (EFT) treatments of the nucleon-nucleon interaction. The author considers NN scattering in S-wave channels at momenta well below the pion mass. In these channels (quasi-)bound states are present at energies well below the scale m_π²/M expected from naturalness arguments. He asks whether, in the presence of such a shallow bound state, there is a regularization scheme which leads to an EFT potential that is both useful and systematic. In general, if a low-lying bound state is present then cutoff regularization leads to an EFT potential which is useful but not systematic, and dimensional regularization with minimal subtraction leads to one which is systematic but not useful. The recently proposed technique of dimensional regularization with power-law divergence subtraction allows the definition of an EFT potential which is both useful and systematic.

  4. Regular homotopy for immersions of graphs into surfaces

    NASA Astrophysics Data System (ADS)

    Permyakov, D. A.

    2016-06-01

    We study invariants of regular immersions of graphs into surfaces up to regular homotopy. The concept of the winding number is used to introduce a new simple combinatorial invariant of regular homotopy. Bibliography: 20 titles.

  5. The effect of regularization on the reconstruction of ACAR data

    NASA Astrophysics Data System (ADS)

    Weber, J. A.; Ceeh, H.; Hugenschmidt, C.; Leitner, M.; Böni, P.

    2014-04-01

    The Fermi surface, i.e. the two-dimensional surface separating occupied and unoccupied states in k-space, is the defining property of a metal. Full information about its shape is mandatory for identifying nesting vectors or for validating band structure calculations. With the angular correlation of positron-electron annihilation radiation (ACAR), it is easy to obtain projections of the Fermi surface. Nevertheless, the method is claimed to be inexact compared with more common approaches such as those based on quantum oscillations or angle-resolved photoemission spectroscopy. In this article we present a method for reconstructing the Fermi surface from projections with statistically correct data treatment, which is able to increase accuracy by introducing different types of regularization.

  6. Analytic regularization in Soft-Collinear Effective Theory

    NASA Astrophysics Data System (ADS)

    Becher, Thomas; Bell, Guido

    2012-06-01

    In high-energy processes which are sensitive to small transverse momenta, individual contributions from collinear and soft momentum regions are not separately well-defined in dimensional regularization. A simple possibility to solve this problem is to introduce additional analytic regulators. We point out that in massless theories the unregularized singularities only appear in real-emission diagrams and that the additional regulators can be introduced in such a way that gauge invariance and the factorized eikonal structure of soft and collinear emissions is maintained. This simplifies factorization proofs and implies, at least in the massless case, that the structure of Soft-Collinear Effective Theory remains completely unchanged by the presence of the additional regulators. Our formalism also provides a simple operator definition of transverse parton distribution functions.

  7. Regularity for steady periodic capillary water waves with vorticity.

    PubMed

    Henry, David

    2012-04-13

    In the following, we prove new regularity results for two-dimensional steady periodic capillary water waves with vorticity, in the absence of stagnation points. Firstly, we prove that if the vorticity function has a Hölder-continuous first derivative, then the free surface is a smooth curve and the streamlines beneath the surface will be real analytic. Furthermore, once we assume that the vorticity function is real analytic, it will follow that the wave surface profile is itself also analytic. A particular case of this result includes irrotational fluid flow where the vorticity is zero. The property of the streamlines being analytic allows us to gain physical insight into small-amplitude waves by justifying a power-series approach.

  8. Baseline Regularization for Computational Drug Repositioning with Longitudinal Observational Data

    PubMed Central

    Kuang, Zhaobin; Thomson, James; Caldwell, Michael; Peissig, Peggy; Stewart, Ron; Page, David

    2016-01-01

    Computational Drug Repositioning (CDR) is the knowledge discovery process of finding new indications for existing drugs leveraging heterogeneous drug-related data. Longitudinal observational data such as Electronic Health Records (EHRs) have become an emerging data source for CDR. To address the high-dimensional, irregular, subject and time-heterogeneous nature of EHRs, we propose Baseline Regularization (BR) and a variant that extend the one-way fixed effect model, which is a standard approach to analyze small-scale longitudinal data. For evaluation, we use the proposed methods to search for drugs that can lower Fasting Blood Glucose (FBG) level in the Marshfield Clinic EHR. Experimental results suggest that the proposed methods are capable of rediscovering drugs that can lower FBG level as well as identifying some potential blood sugar lowering drugs in the literature.

  9. Generalised hyperbolicity in spacetimes with Lipschitz regularity

    NASA Astrophysics Data System (ADS)

    Sanchez Sanchez, Yafet; Vickers, James A.

    2017-02-01

    In this paper we obtain general conditions under which the wave equation is well-posed in spacetimes with metrics of Lipschitz regularity. In particular, the results can be applied to spacetimes where there is a loss of regularity on a hypersurface such as shell-crossing singularities, thin shells of matter, and surface layers. This provides a framework for regarding gravitational singularities not as obstructions to the world lines of point-particles, but rather as obstructions to the dynamics of test fields.

  10. Demosaicing as the problem of regularization

    NASA Astrophysics Data System (ADS)

    Kunina, Irina; Volkov, Aleksey; Gladilin, Sergey; Nikolaev, Dmitry

    2015-12-01

    Demosaicing is the process of reconstruction of a full-color image from the Bayer mosaic, which is used in digital cameras for image formation. This problem is usually considered as an interpolation problem. In this paper, we propose to consider the demosaicing problem as a problem of solving an underdetermined system of algebraic equations using regularization methods. We consider regularization with the standard l1/2-, l1-, and l2-norms and their effect on the quality of image reconstruction. The experimental results showed that the proposed technique can both be used in existing methods and serve as the basis for new ones.
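
    A minimal sketch (illustrative only, not the authors' implementation) of the idea above: treat the observed mosaic samples as an underdetermined linear system A x = b and pick the l2-regularized solution in closed form.

```python
import numpy as np

# Underdetermined system: fewer observations (Bayer samples) than
# unknowns (full-color pixels). The l2-regularized solution is
# x = (A^T A + lam*I)^{-1} A^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))   # 40 observations, 100 unknowns
x_true = rng.standard_normal(100)
b = A @ x_true

lam = 1e-6
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# The regularized solution reproduces the observations up to a
# lam-scaled residual.
print(np.allclose(A @ x_hat, b, atol=1e-4))
```

    For small lam this approaches the minimum-norm solution of A x = b; the choice of norm (l1/2, l1, l2) is what distinguishes the variants compared in the paper.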

  11. Regularizing the divergent structure of light-front currents

    SciTech Connect

    Bakker, Bernard L. G.; Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2001-04-01

    The divergences appearing in the (3+1)-dimensional fermion-loop calculations are often regulated by smearing the vertices in a covariant manner. Performing a parallel light-front calculation, we corroborate the similarity between the vertex-smearing technique and the Pauli-Villars regularization. In the light-front calculation of the electromagnetic meson current, we find that the persistent end-point singularity that appears in the case of point vertices is removed even if the smeared vertex is taken to the limit of the point vertex. Recapitulating the current conservation, we substantiate the finiteness of both valence and nonvalence contributions in all components of the current with the regularized bound-state vertex. However, we stress that each contribution, valence or nonvalence, depends on the reference frame even though the sum is always frame independent. The numerical taxonomy of each contribution including the instantaneous contribution and the zero-mode contribution is presented in the {pi}, K, and D-meson form factors.

  12. COLLIER: A fortran-based complex one-loop library in extended regularizations

    NASA Astrophysics Data System (ADS)

    Denner, Ansgar; Dittmaier, Stefan; Hofer, Lars

    2017-03-01

    We present the library COLLIER for the numerical evaluation of one-loop scalar and tensor integrals in perturbative relativistic quantum field theories. The code provides numerical results for arbitrary tensor and scalar integrals for scattering processes in general quantum field theories. For tensor integrals either the coefficients in a covariant decomposition or the tensor components themselves are provided. COLLIER supports complex masses, which are needed in calculations involving unstable particles. Ultraviolet and infrared singularities are treated in dimensional regularization. For soft and collinear singularities mass regularization is available as an alternative.

  13. Regular Gleason Measures and Generalized Effect Algebras

    NASA Astrophysics Data System (ADS)

    Dvurečenskij, Anatolij; Janda, Jiří

    2015-12-01

    We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied within the framework of generalized effect algebras.

  14. Regularizing cosmological singularities by varying physical constants

    SciTech Connect

    Dąbrowski, Mariusz P.; Marosek, Konrad

    2013-02-01

    Varying physical constant cosmologies were claimed to solve standard cosmological problems such as the horizon, the flatness and the Λ-problem. In this paper, we suggest yet another possible application of these theories: solving the singularity problem. By specifying some examples we show that various cosmological singularities may be regularized provided the physical constants evolve in time in an appropriate way.

  15. TAUBERIAN THEOREMS FOR MATRIX REGULAR VARIATION

    PubMed Central

    MEERSCHAERT, M. M.; SCHEFFLER, H.-P.

    2013-01-01

    Karamata’s Tauberian theorem relates the asymptotics of a nondecreasing right-continuous function to that of its Laplace-Stieltjes transform, using regular variation. This paper establishes the analogous Tauberian theorem for matrix-valued functions. Some applications to time series analysis are indicated. PMID:24644367
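
    For orientation, the scalar Karamata Tauberian theorem that the paper lifts to the matrix-valued setting can be stated as follows (standard form; U nondecreasing and right-continuous, ρ ≥ 0, L slowly varying):

```latex
% Scalar Karamata Tauberian theorem (standard statement, included for
% context; the paper's contribution is the matrix-valued analogue).
\[
\omega(s) = \int_0^\infty e^{-st}\, dU(t)
  \;\sim\; s^{-\rho}\, L(1/s) \quad (s \to 0^+)
\qquad\Longleftrightarrow\qquad
U(t) \;\sim\; \frac{t^{\rho}\, L(t)}{\Gamma(\rho+1)} \quad (t \to \infty).
\]
```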

  16. Regular Nonchaotic Attractors with Positive Plural

    NASA Astrophysics Data System (ADS)

    Zhang, Xu

    2016-12-01

    The study of strange nonchaotic attractors is an interesting topic in which the dynamics are neither regular nor chaotic (chaotic meaning positive Lyapunov exponents) and the attractors have complicated geometric, or fractal, structure. It is found that in a class of planar first-order nonautonomous systems there can exist attractors whose shape is regular, on which the orbits are transitive, and whose dynamics are not chaotic. We call attractors of this type regular nonchaotic attractors with positive plural; they are distinct from strange nonchaotic attractors, attracting fixed points, and attracting periodic orbits. Several examples with computer simulations are given. The first two examples have annulus-shaped attractors, and another two have disk-shaped attractors. The last two examples, with external driving at two incommensurate frequencies, also have regular nonchaotic attractors with positive plural, implying that external driving at two incommensurate frequencies may not be sufficient to guarantee that a system has strange nonchaotic attractors.

  17. Generalisation of Regular and Irregular Morphological Patterns.

    ERIC Educational Resources Information Center

    Prasada, Sandeep; Pinker, Steven

    1993-01-01

    When it comes to explaining English verbs' patterns of regular and irregular generalization, single-network theories have difficulty with the former process and rule-only theories with the latter. Linguistic and psycholinguistic evidence, based on observation during experiments and simulations in morphological pattern generation, independently call…

  18. Fast Image Reconstruction with L2-Regularization

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Fan, Audrey P.; Setsompop, Kawin; Cauley, Stephen F.; Wald, Lawrence L.; Adalsteinsson, Elfar

    2014-01-01

    Purpose We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speed-up relative to state-of-the-art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction. Materials and Methods We compare fast L2-based methods to state-of-the-art algorithms employing iterative L1- and L2-regularization in numerical phantom and in vivo data in three applications: (1) fast Quantitative Susceptibility Mapping (QSM), (2) lipid artifact suppression in Magnetic Resonance Spectroscopic Imaging (MRSI), and (3) Diffusion Spectrum Imaging (DSI). In all cases, the proposed L2-based methods are compared with the state-of-the-art algorithms, and a two to three orders of magnitude speed-up is demonstrated with similar reconstruction quality. Results The closed-form solution developed for regularized QSM allows processing of a 3D volume in under 5 seconds, the proposed lipid suppression algorithm takes under 1 second to reconstruct single-slice MRSI data, while the PCA-based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice in under 30 seconds, all running in Matlab on a standard workstation. Conclusion For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or L1-based iterative algorithms, without compromising image quality. PMID:24395184
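
    A hedged sketch of why closed-form L2-regularization can be so fast (illustrative 1-D deconvolution, not the paper's MRI pipelines): when the forward operator is a periodic convolution, the Tikhonov solution diagonalizes in the Fourier domain and costs only a few FFTs.

```python
import numpy as np

# Closed-form Tikhonov deconvolution in the Fourier domain:
# x = F^{-1}[ conj(H) * F(b) / (|H|^2 + lam) ].
rng = np.random.default_rng(1)
n = 256
h = np.zeros(n)
h[:5] = 1.0 / 5.0                                        # simple blur kernel
x_true = rng.standard_normal(n)
b = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)).real  # blurred signal

H = np.fft.fft(h)
lam = 1e-8   # tiny regularizer suffices in this noiseless demo
x_hat = np.fft.ifft(np.conj(H) * np.fft.fft(b) / (np.abs(H) ** 2 + lam)).real

print(np.allclose(x_hat, x_true, atol=1e-2))
```

    The same diagonalization argument underlies closed-form regularized solvers whenever both the forward model and the regularizer are diagonal in a common transform domain.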

  19. Strategies of Teachers in the Regular Classroom

    ERIC Educational Resources Information Center

    De Leeuw, Renske Ria; De Boer, Anke Aaltje

    2016-01-01

    It is known that regular schoolteachers have difficulties in educating students with social, emotional and behavioral difficulties (SEBD), mainly because of their disruptive behavior. In order to manage the disruptive behavior of students with SEBD, much advice and many strategies are provided in the educational literature. However, very little is known…

  20. Regularities in Spearman's Law of Diminishing Returns.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    2003-01-01

    Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…

  1. On the regularity in some variational problems

    NASA Astrophysics Data System (ADS)

    Ragusa, Maria Alessandra; Tachikawa, Atsushi

    2017-01-01

    Our main goal is to study some regularity results in which estimates in Morrey spaces are obtained for the derivatives of local minimizers of variational integrals of the form 𝒜 (u ,Ω )= ∫Ω F (x ,u ,D u ) dx, where Ω is a bounded domain in ℝm and the integrand F takes several different forms.

  2. Prox-regular functions in Hilbert spaces

    NASA Astrophysics Data System (ADS)

    Bernard, Frédéric; Thibault, Lionel

    2005-03-01

    This paper studies the prox-regularity concept for functions in the general context of Hilbert space. In particular, a subdifferential characterization is established as well as several other properties. It is also shown that the Moreau envelopes of such functions are continuously differentiable.

  3. Semantic Gender Assignment Regularities in German

    ERIC Educational Resources Information Center

    Schwichtenberg, Beate; Schiller, Niels O.

    2004-01-01

    Gender assignment relates to a native speaker's knowledge of the structure of the gender system of his/her language, allowing the speaker to select the appropriate gender for each noun. Whereas categorical assignment rules and exceptional gender assignment are well investigated, assignment regularities, i.e., tendencies in the gender distribution…

  4. Starting flow in regular polygonal ducts

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.

    2016-06-01

    The starting flows in regular polygonal ducts of S = 3, 4, 5, 6, 8 sides are determined by the method of eigenfunction superposition. The necessary S-fold symmetric eigenfunctions and eigenvalues of the Helmholtz equation are found either exactly or by boundary point match. The results show the starting time is governed by the first eigenvalue.

  5. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Stâhl, Fredrik; Gulliksson, Mârten

    2008-09-01

    Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.

  6. Regularization of turbulence - a comprehensive modeling approach

    NASA Astrophysics Data System (ADS)

    Geurts, B. J.

    2011-12-01

    Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulence will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent way. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred will be given on the basis of homogeneous isotropic turbulence.

  7. Regularity of rotational travelling water waves.

    PubMed

    Escher, Joachim

    2012-04-13

    Several recent results on the regularity of streamlines beneath a rotational travelling wave, along with the wave profile itself, will be discussed. The survey includes the classical water wave problem in both finite and infinite depth, capillary waves and solitary waves as well. A common assumption in all models to be discussed is the absence of stagnation points.

  8. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635
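
    As a hedged illustration of the penalized least squares methods surveyed here (not code from the paper), a minimal ISTA solver for the lasso, min 0.5‖Ax−b‖² + λ‖x‖₁, recovers a sparse signal in the p ≫ n regime:

```python
import numpy as np

# ISTA: gradient step on the quadratic data-fit term followed by
# soft-thresholding, the proximal operator of the l1 penalty.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
n, p, k = 80, 200, 5                 # n samples, p >> n features, k nonzeros
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[:k] = 3.0
b = A @ x_true

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of grad
x = np.zeros(p)
for _ in range(1000):
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)

# The l1 penalty selects the true support despite p > n.
print(np.all(np.abs(x[k:]) < 0.5), np.all(x[:k] > 1.0))
```

    This is the simplest instance of the "limits of dimensionality that regularization methods can handle" question: with Gaussian designs, n of order k·log(p/k) samples suffice for such recovery.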

  9. Two-Dimensional Vernier Scale

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1992-01-01

    Modified vernier scale gives accurate two-dimensional coordinates from maps, drawings, or cathode-ray-tube displays. Movable circular overlay rests on fixed rectangular-grid overlay. Pitch of circles nine-tenths that of grid and, for greatest accuracy, radii of circles large compared with pitch of grid. Scale enables user to interpolate between finest divisions of regularly spaced rule simply by observing which mark on auxiliary vernier rule aligns with mark on primary rule.

  10. Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression.

    PubMed

    Zhen, Xiantong; Yu, Mengyang; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo

    2016-06-08

    Multioutput regression has recently shown great ability to solve challenging problems in both computer vision and medical image analysis. However, due to the huge image variability and ambiguity, it is fundamentally challenging to handle the highly complex input-target relationship of multioutput regression, especially with indiscriminate high-dimensional representations. In this paper, we propose a novel supervised descriptor learning (SDL) algorithm for multioutput regression, which can establish discriminative and compact feature representations to improve the multivariate estimation performance. The SDL is formulated as generalized low-rank approximations of matrices with a supervised manifold regularization. The SDL is able to simultaneously extract discriminative features closely related to multivariate targets and remove irrelevant and redundant information by transforming raw features into a new low-dimensional space aligned to targets. The resulting discriminative yet compact descriptor largely reduces the variability and ambiguity for multioutput regression, which enables more accurate and efficient multivariate estimation. We conduct extensive evaluation of the proposed SDL on both synthetic data and real-world multioutput regression tasks for both computer vision and medical image analysis. Experimental results have shown that the proposed SDL can achieve high multivariate estimation accuracy on all tasks and largely outperforms state-of-the-art algorithms. Our method establishes a novel SDL framework for multioutput regression, which can be widely used to boost the performance in different applications.

  11. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  12. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  13. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  14. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  15. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  16. Charged fermions tunneling from regular black holes

    SciTech Connect

    Sharif, M.; Javed, W.

    2012-11-15

    We study Hawking radiation of charged fermions as a tunneling process from charged regular black holes, i.e., the Bardeen and ABGB black holes. For this purpose, we apply the semiclassical WKB approximation to the general covariant Dirac equation for charged particles and evaluate the tunneling probabilities. We recover the Hawking temperature corresponding to these charged regular black holes. Further, we consider the back-reaction effects of the emitted spin particles from black holes and calculate their corresponding quantum corrections to the radiation spectrum. We find that this radiation spectrum is not purely thermal due to the energy and charge conservation but has some corrections. In the absence of charge, e = 0, our results are consistent with those already present in the literature.

  17. Superfast Tikhonov Regularization of Toeplitz Systems

    NASA Astrophysics Data System (ADS)

    Turnes, Christopher K.; Balcan, Doru; Romberg, Justin

    2014-08-01

    Toeplitz-structured linear systems arise often in practical engineering problems. Correspondingly, a number of algorithms have been developed that exploit Toeplitz structure to gain computational efficiency when solving these systems. The earliest "fast" algorithms for Toeplitz systems required O(n^2) operations, while more recent "superfast" algorithms reduce the cost to O(n (log n)^2) or below. In this work, we present a superfast algorithm for Tikhonov regularization of Toeplitz systems. Using an "extension-and-transformation" technique, our algorithm translates a Tikhonov-regularized Toeplitz system into a type of specialized polynomial problem known as tangential interpolation. Under this formulation, we can compute the solution in only O(n (log n)^2) operations. We use numerical simulations to demonstrate our algorithm's complexity and verify that it returns stable solutions.
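
    For contrast with the superfast O(n (log n)^2) method, here is the naive dense O(n^3) Tikhonov solve of a Toeplitz system it improves upon (illustrative only; the kernel 0.5^|i-j| is an assumption for the demo, not from the paper):

```python
import numpy as np

# Build a symmetric Toeplitz matrix T[i, j] = 0.5**|i - j| and solve the
# Tikhonov-regularized normal equations (T^T T + lam*I) x = T^T b densely.
n = 64
c = 0.5 ** np.arange(n)
T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

rng = np.random.default_rng(3)
x_true = rng.standard_normal(n)
b = T @ x_true

lam = 1e-10   # well-conditioned demo system, so a tiny regularizer suffices
x_hat = np.linalg.solve(T.T @ T + lam * np.eye(n), T.T @ b)
print(np.allclose(x_hat, x_true, atol=1e-4))
```

    The superfast algorithm in the paper replaces this dense solve with a tangential-interpolation formulation exploiting the Toeplitz structure.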

  18. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in user input sanitization of software systems often lead to vulnerabilities. Among them many are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FST. A compact representation of FST is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.
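
    For illustration only (using Python's re engine, not the paper's FST model), the greedy/reluctant distinction among replacement semantics:

```python
import re

s = "<a><b>"
greedy = re.sub(r"<.*>", "X", s)      # ".*" matches as much as possible
reluctant = re.sub(r"<.*?>", "X", s)  # ".*?" matches as little as possible
print(greedy)     # X   (one match spans the whole string)
print(reluctant)  # XX  (two short matches, one per tag)
```

    Sanitizers that assume one semantics but run under the other are exactly the kind of bug this line of work aims to detect.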

  19. Tracking magnetogram proper motions by multiscale regularization

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.

    1995-01-01

    Long uninterrupted sequences of solar magnetograms from the global oscillations network group (GONG) network and from the solar and heliospheric observatory (SOHO) satellite will provide the opportunity to study the proper motions of magnetic features. The possible use of multiscale regularization, a scale-recursive estimation technique which begins with a prior model of how state variables and their statistical properties propagate over scale, is examined. Short magnetogram sequences are analyzed with the multiscale regularization algorithm as applied to optical flow. This algorithm is found to be efficient, provides results for all the spatial scales spanned by the data, and provides error estimates for the solutions. It is found that the algorithm is less sensitive to evolutionary changes than correlation tracking.

  20. 3D Gravity Inversion using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Toushmalani, Reza; Saibi, Hakim

    2015-08-01

    Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for a gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined the 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from the Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of salt dome in this region.

  1. Regularity of nuclear structure under random interactions

    SciTech Connect

    Zhao, Y. M.

    2011-05-06

    In this contribution I present a brief introduction to simplicity out of complexity in nuclear structure, specifically, the regularity of nuclear structure under random interactions. I exemplify such simplicity by two examples: spin-zero ground state dominance and positive parity ground state dominance in even-even nuclei. Then I discuss two recent results of nuclear structure in the presence of random interactions, in collaboration with Prof. Arima. Firstly I discuss sd bosons under random interactions, with the focus on excited states in the yrast band. We find a few regular patterns in these excited levels. Secondly I discuss our recent efforts towards obtaining eigenvalues without diagonalizing the full matrices of the nuclear shell model Hamiltonian.

  2. Power-law regularities in human language

    NASA Astrophysics Data System (ADS)

    Mehri, Ali; Lashkari, Sahar Mohammadpour

    2016-11-01

    Complex structure of human language enables us to exchange very complicated information. This communication system obeys some common nonlinear statistical regularities. We investigate four important long-range features of human language. We perform our calculations for adopted works of seven famous litterateurs. Zipf's law and Heaps' law, which imply well-known power-law behaviors, are established in human language, showing a qualitative inverse relation with each other. Furthermore, the informational content associated with word ordering is measured using an entropic metric. We also calculate the fractal dimension of words in the text using the box-counting method. The fractal dimension of each word, a positive value less than or equal to one, exhibits its spatial distribution in the text. Generally, we can claim that human language follows the mentioned power-law regularities. Power-law relations imply the existence of long-range correlations between the word types used to convey a particular idea.
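
    A toy check of the Zipf-type rank-frequency regularity (synthetic text, not the literary corpora analyzed in the paper): generate word counts with f(r) ∝ 1/r and recover the exponent from a log-log fit.

```python
from collections import Counter
import math

# Build a synthetic "text" whose word frequencies follow f(r) ~ 2000/r.
words = []
for rank in range(1, 201):
    words += [f"w{rank}"] * (2000 // rank)

# Rank-frequency data and an ordinary least-squares slope in log-log space.
freqs = sorted(Counter(words).values(), reverse=True)
xs = [math.log(r) for r in range(1, len(freqs) + 1)]
ys = [math.log(f) for f in freqs]
m = len(xs)
slope = (m * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (m * sum(x * x for x in xs) - sum(xs) ** 2)
print(round(slope, 1))   # ≈ -1.0, the classical Zipf exponent
```

    Real corpora deviate from the ideal slope of -1, which is part of what the entropic and fractal measures in the paper are designed to capture.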

  3. Symmetries and regular behavior of Hamiltonian systems.

    PubMed

    Kozlov, Valeriy V.

    1996-03-01

    The behavior of the phase trajectories of the Hamilton equations is commonly classified as regular and chaotic. Regularity is usually related to the condition for complete integrability, i.e., a Hamiltonian system with n degrees of freedom has n independent integrals in involution. If at the same time the simultaneous integral manifolds are compact, the solutions of the Hamilton equations are quasiperiodic. In particular, the entropy of the Hamiltonian phase flow of a completely integrable system is zero. It is found that there is a broader class of Hamiltonian systems that do not show signs of chaotic behavior. These are systems that allow n commuting "Lagrangian" vector fields, i.e., the symplectic 2-form on each pair of such fields is zero. They include, in particular, Hamiltonian systems with multivalued integrals. (c) 1996 American Institute of Physics.

  4. A regularization approach to hydrofacies delineation

    SciTech Connect

    Wohlberg, Brendt; Tartakovsky, Daniel

    2009-01-01

    We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.

  5. Speech enhancement using local spectral regularization

    NASA Astrophysics Data System (ADS)

    Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly; Diaz, Arnoldo

    2016-09-01

    A locally-adaptive algorithm for speech enhancement based on local spectral regularization is presented. The algorithm retrieves a clean speech signal from a noisy one using locally-adaptive signal processing and improves the quality of the noisy signal in terms of objective metrics. Computer simulation results obtained with the proposed algorithm in processing speech signals corrupted with additive noise are presented and discussed.

  6. Regular Language Constrained Sequence Alignment Revisited

    NASA Astrophysics Data System (ADS)

    Kucherov, Gregory; Pinhas, Tamar; Ziv-Ukelson, Michal

    Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, Arslan [1] introduced the Regular Language Constrained Sequence Alignment Problem and proposed an O(n^2 t^4) time and O(n^2 t^2) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the non-deterministic automaton, which is given as input. Chung et al. [2] proposed a faster O(n^2 t^3) time algorithm for the same problem. In this paper, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst-case time complexity bound to O(n^2 t^3 / log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner Tree computation. While it does not improve the run-time complexity in the worst case, our simulations show that both approaches are efficient in practice, especially when the input automata are dense.

  7. Hyperspectral Image Recovery via Hybrid Regularization

    NASA Astrophysics Data System (ADS)

    Arablouei, Reza; de Hoog, Frank

    2016-12-01

    Natural images tend to mostly consist of smooth regions with individual pixels having highly correlated spectra. This information can be exploited to recover hyperspectral images of natural scenes from their incomplete and noisy measurements. To perform the recovery while taking full advantage of the prior knowledge, we formulate a composite cost function containing a square-error data-fitting term and two distinct regularization terms pertaining to spatial and spectral domains. The regularization for the spatial domain is the sum of total-variation of the image frames corresponding to all spectral bands. The regularization for the spectral domain is the l1-norm of the coefficient matrix obtained by applying a suitable sparsifying transform to the spectra of the pixels. We use an accelerated proximal-subgradient method to minimize the formulated cost function. We analyze the performance of the proposed algorithm and prove its convergence. Numerical simulations using real hyperspectral images exhibit that the proposed algorithm offers an excellent recovery performance with a number of measurements that is only a small fraction of the hyperspectral image data size. Simulation results also show that the proposed algorithm significantly outperforms an accelerated proximal-gradient algorithm that solves the classical basis-pursuit denoising problem to recover the hyperspectral image.
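    The proximal machinery described above can be illustrated in miniature. The sketch below is a simplification under stated assumptions: the spatial total-variation term and the acceleration are dropped, leaving plain ISTA (iterative soft thresholding) on a generic square-error-plus-l1 recovery problem; it is not the authors' algorithm, only the core prox idea.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iters=2000):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))          # underdetermined measurement matrix
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]     # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

    Even with only 30 measurements of a 60-dimensional signal, the sparsity prior lets the iteration localize the three active components.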

  8. Hyperspectral Image Recovery via Hybrid Regularization.

    PubMed

    Arablouei, Reza; de Hoog, Frank

    2016-09-27

    Natural images tend to mostly consist of smooth regions with individual pixels having highly correlated spectra. This information can be exploited to recover hyperspectral images of natural scenes from their incomplete and noisy measurements. To perform the recovery while taking full advantage of the prior knowledge, we formulate a composite cost function containing a square-error data-fitting term and two distinct regularization terms pertaining to spatial and spectral domains. The regularization for the spatial domain is the sum of total-variation of the image frames corresponding to all spectral bands. The regularization for the spectral domain is the l1-norm of the coefficient matrix obtained by applying a suitable sparsifying transform to the spectra of the pixels. We use an accelerated proximal-subgradient method to minimize the formulated cost function. We analyse the performance of the proposed algorithm and prove its convergence. Numerical simulations using real hyperspectral images exhibit that the proposed algorithm offers an excellent recovery performance with a number of measurements that is only a small fraction of the hyperspectral image data size. Simulation results also show that the proposed algorithm significantly outperforms an accelerated proximal-gradient algorithm that solves the classical basis-pursuit denoising problem to recover the hyperspectral image.

  9. Guaranteed classification via regularized similarity learning.

    PubMed

    Guo, Zheng-Chu; Ying, Yiming

    2014-03-01

    Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Despite the many approaches to similarity metric learning that have been proposed, there has been little theoretical study of the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establish their generalization bounds. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound of similarity learning. This shows that a good generalization of the learned similarity function guarantees a good classification of the resulting linear classifier. Our results extend and improve those obtained by Bellet, Habrard, and Sebban (2012). Because their techniques depend on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound obtained there holds true only for the Frobenius matrix-norm regularization. Our techniques, based on the Rademacher complexity (Bartlett & Mendelson, 2002) and a related Khinchin-type inequality, enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse L1-norm and mixed (2,1)-norm.

  10. Automatic detection of regularly repeating vocalizations

    NASA Astrophysics Data System (ADS)

    Mellinger, David

    2005-09-01

    Many animal species produce repetitive sounds at regular intervals. This regularity can be used for automatic recognition of the sounds, providing improved detection at a given signal-to-noise ratio. Here, the detection of sperm whale sounds is examined. Sperm whales produce highly repetitive ``regular clicks'' at periods of about 0.2-2 s, and faster click trains in certain behavioral contexts. The following detection procedure was tested: a spectrogram was computed; values within a certain frequency band were summed; time windowing was applied; each windowed segment was autocorrelated; and the maximum of the autocorrelation within a certain periodicity range was chosen. This procedure was tested on sets of recordings containing sperm whale sounds and interfering sounds, both low-frequency recordings from autonomous hydrophones and high-frequency ones from towed hydrophone arrays. An optimization procedure iteratively varies detection parameters (spectrogram frame length and frequency range, window length, periodicity range, etc.). Performance of various sets of parameters was measured by setting a standard level of allowable missed calls, and the resulting optimum parameters are described. Performance is also compared to that of a neural network trained using the data sets. The method is also demonstrated for sounds of blue whales, minke whales, and seismic airguns. [Funding from ONR.]
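    The autocorrelation step of the procedure can be sketched on a synthetic click train. This is a minimal illustration (function name and parameters are mine, not the author's): the envelope is mean-removed, autocorrelated, and the peak lag within the allowed periodicity range is taken as the click period.

```python
import numpy as np

def detect_period(signal, fs, min_period, max_period):
    # Autocorrelate the mean-removed signal and pick the peak lag
    # within the allowed periodicity range [min_period, max_period] (s).
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo, hi = int(min_period * fs), int(max_period * fs)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs, ac[lag] / ac[0]    # estimated period (s), normalized peak

fs = 1000                               # 1 kHz sampling
t = np.arange(0, 4.0, 1 / fs)
clicks = np.zeros_like(t)
clicks[::500] = 1.0                     # one "click" every 0.5 s
period, strength = detect_period(clicks, fs, min_period=0.2, max_period=2.0)
```

    On real recordings the same logic would be applied to the band-summed spectrogram rather than the raw waveform, as the abstract describes.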

  11. Regularization Parameter Selections via Generalized Information Criterion

    PubMed Central

    Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling

    2009-01-01

    We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material. PMID:20676354
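    In the spirit of the GIC above, the toy sketch below scores candidate submodels of a small design matrix: kappa = 2 gives an AIC-type selector and kappa = log(n) a BIC-type one. The exhaustive subset search is my simplification standing in for the penalized-likelihood path; it is feasible only for small p.

```python
import numpy as np
from itertools import combinations

def gic(rss, df, n, kappa):
    # Generalized information criterion: kappa = 2 -> AIC-type,
    # kappa = log(n) -> BIC-type selector.
    return n * np.log(rss / n) + kappa * df

def select_by_gic(X, y, kappa):
    # Exhaustively score every non-empty candidate submodel.
    n, p = X.shape
    best_score, best_set = np.inf, ()
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            beta = np.linalg.lstsq(X[:, S], y, rcond=None)[0]
            rss = float(np.sum((y - X[:, S] @ beta) ** 2))
            score = gic(rss, k, n, kappa)
            if score < best_score:
                best_score, best_set = score, S
    return best_set

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 6))
y = 2.0 * X[:, 0] - 3.0 * X[:, 3] + 0.5 * rng.standard_normal(n)
selected = select_by_gic(X, y, kappa=np.log(n))   # BIC-type choice
```

    With a strong signal the BIC-type selector recovers the true predictors; as the abstract notes, the AIC-type choice (kappa = 2) tends to admit extra variables.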

  12. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  13. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

    Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances, which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from the global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, and the estimates are uniform in alpha.

  14. On the Global Regularity for the 3D Magnetohydrodynamics Equations Involving Partial Components

    NASA Astrophysics Data System (ADS)

    Qian, Chenyin

    2017-01-01

    In this paper, we study the regularity criteria of the three-dimensional magnetohydrodynamics system in terms of some components of the velocity field and the magnetic field. With a decomposition of the four nonlinear terms of the system, this result gives an improvement of some corresponding previous works (Yamazaki in J Math Fluid Mech 16: 551-570, 2014; Jia and Zhou in Nonlinear Anal Real World Appl 13: 410-418, 2012).

  15. Information-theoretic semi-supervised metric learning via entropy regularization.

    PubMed

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.

  16. Total variation regularization for fMRI-based prediction of behavior.

    PubMed

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-07-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional magnetic resonance imaging (fMRI) data, which provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioral variables from brain activation patterns, better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the l1 norm of the image gradient, also known as its total variation (TV), as regularization. We apply this method for the first time to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification.
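    The TV penalty itself is simple to state: the l1 norm of the discrete image gradient. A minimal isotropic version is sketched below (the discretization choice is mine; the paper's 3D brain-volume version follows the same pattern).

```python
import numpy as np

def total_variation(img):
    # Isotropic TV: sum over pixels of the gradient magnitude, i.e. the
    # l1 norm of the image gradient used as the regularizer above.
    dx = np.diff(img, axis=1, prepend=img[:, :1])   # horizontal differences
    dy = np.diff(img, axis=0, prepend=img[:1, :])   # vertical differences
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))

step = np.zeros((4, 4))
step[:, 2:] = 1.0                # a sharp vertical edge: TV = 4 (one unit per row)
tv_step = total_variation(step)
```

    A piecewise-constant image pays only for its edges, which is why TV regularization favors spatially coherent, blocky predictive maps rather than scattered voxels.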

  17. Quaternion regularization and trajectory motion control in celestial mechanics and astrodynamics: II

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    2014-07-01

    Problems of regularization in celestial mechanics and astrodynamics are considered, and basic regular quaternion models for celestial mechanics and astrodynamics are presented. It is shown that the effectiveness of analytical studies and numerical solutions to boundary value problems of controlling the trajectory motion of spacecraft can be improved by using quaternion models of astrodynamics. In this second part of the paper, specific singularity-type features (division by zero) are considered. They result from using classical equations in angular variables (particularly in Euler variables) in celestial mechanics and astrodynamics and can be eliminated by using Euler (Rodrigues-Hamilton) parameters and Hamilton quaternions. Basic regular (in the above sense) quaternion models of celestial mechanics and astrodynamics are considered; these include equations of trajectory motion written in nonholonomic, orbital, and ideal moving trihedrals whose rotational motions are described by Euler parameters and quaternions of turn; and quaternion equations of instantaneous orbit orientation of a celestial body (spacecraft). New quaternion regular equations are derived for the perturbed three-dimensional two-body problem (spacecraft trajectory motion). These equations are constructed using ideal rectangular Hansen coordinates and quaternion variables, and they have additional advantages over those known for regular Kustaanheimo-Stiefel equations.
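    The singularity-free bookkeeping that Euler (Rodrigues-Hamilton) parameters provide can be sketched with a few lines of quaternion algebra. This is generic quaternion kinematics in the scalar-first convention, not the paper's specific orbital equations:

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of two quaternions, scalar part first.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def rotate(q, v):
    # Rotate vector v by unit quaternion q via q * (0, v) * q^{-1};
    # no trigonometric singularities, unlike Euler-angle formulations.
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

# Quarter turn about the z-axis.
half = np.pi / 4
qz90 = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
v_rot = rotate(qz90, np.array([1.0, 0.0, 0.0]))
```

    Because the attitude lives on the unit quaternion, the division-by-zero configurations of Euler-angle descriptions never arise, which is the "regularity" the quaternion models above exploit.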

  18. Convergence and Fluctuations of Regularized Tyler Estimators

    NASA Astrophysics Data System (ADS)

    Kammoun, Abla; Couillet, Romain; Pascal, Ferderic; Alouini, Mohamed-Slim

    2016-02-01

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate, and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major obstacle to the use of RTEs in practice is the setting of the regularization parameter $\rho$. While a high value of $\rho$ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations $n$ and/or their size $N$ increase together. First asymptotic results have recently been obtained under the assumption that $N$ and $n$ are large and commensurable. Interestingly, no results concerning the regime of $n$ going to infinity with $N$ fixed exist, even though the investigation of this assumption has usually predated the analysis of the more difficult case of $N$ and $n$ both large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when $n\to\infty$ with $N$ fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter $\rho$.
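    The RTE fixed point can be written down directly. The sketch below uses one common parameterization (shrinkage of the Tyler iteration toward the identity); the exact normalization varies across the literature, so treat the constants as an assumption rather than the paper's definition.

```python
import numpy as np

def regularized_tyler(X, rho, n_iters=50):
    # Fixed-point iteration for a regularized Tyler scatter estimator:
    # Sigma <- (1-rho)*(N/n)*sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i) + rho*I
    n, N = X.shape
    Sigma = np.eye(N)
    for _ in range(n_iters):
        inv = np.linalg.inv(Sigma)
        q = np.einsum("ij,jk,ik->i", X, inv, X)   # x_i^T Sigma^{-1} x_i
        Sigma = (1 - rho) * (N / n) * (X.T / q) @ X + rho * np.eye(N)
    return Sigma

rng = np.random.default_rng(2)
# Heavy-tailed (Student-t) samples with an elongated first axis.
scales = np.array([3.0, 1.0, 1.0])
X = rng.standard_t(df=3, size=(2000, 3)) * scales
Sigma = regularized_tyler(X, rho=0.1)
```

    The per-sample weights 1/(x_i^T Sigma^{-1} x_i) discount outliers, giving the robustness noted above, while the rho*I term guarantees eigenvalues bounded away from zero by construction.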

  19. Axial Presentations of Regular Arcs on Mn

    PubMed Central

    Morse, Marston

    1972-01-01

    THEOREM 1. Let Mn be a Riemannian manifold of class Cm, m > 0. On Mn let g be a simple compact, sensed, regular arc whose local coordinates are functions of class Cm of the algebraic arc length s, measured along g from a prescribed point of g. There then exists a presentation (F: U, X) [unk] [unk]Mn such that g [unk] X, and each point p(s) of g is represented in the euclidean domain U by coordinates (x1,...,xn) = (s,0,...,0). PMID:16592036

  20. Multichannel image regularization using anisotropic geodesic filtering

    SciTech Connect

    Grazzini, Jacopo A

    2010-01-01

    This paper extends a recent image-dependent regularization approach aimed at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. Building on this, we design a generalized anisotropic geodesic filter, incorporating not only a measure of the edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.

  1. Regularization ambiguities in loop quantum gravity

    SciTech Connect

    Perez, Alejandro

    2006-02-15

    One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation, which is free of ultraviolet divergences. However, ambiguities associated to the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem--the existence of well-behaved regularization of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated to the SU(2) unitary representation used in the diffeomorphism covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated to the definition of the physical inner product - it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find

  2. New Regularization Method for EXAFS Analysis

    SciTech Connect

    Reich, Tatiana Ye.; Reich, Tobias; Korshunov, Maxim E.; Antonova, Tatiana V.; Ageev, Alexander L.; Moll, Henry

    2007-02-02

    As an alternative to the analysis of EXAFS spectra by conventional shell fitting, the Tikhonov regularization method has been proposed. An improved algorithm that utilizes a priori information about the sample has been developed and applied to the analysis of U L3-edge spectra of soddyite, (UO2)2SiO4·2H2O, and of U(VI) sorbed onto kaolinite. The partial radial distribution functions g1(UU), g2(USi), and g3(UO) of soddyite agree with crystallographic values and previous EXAFS results.
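    Stripped of the EXAFS specifics and the a priori constraints, Tikhonov regularization of a linear model reduces to a damped least-squares solve; a generic sketch:

```python
import numpy as np

def tikhonov(A, b, alpha):
    # Minimize ||A g - b||^2 + alpha*||g||^2. The normal equations become
    # (A^T A + alpha*I) g = A^T b, well-posed even when A^T A is singular.
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 5))
g_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ g_true
g_small = tikhonov(A, b, alpha=1e-8)   # ~ ordinary least squares
g_large = tikhonov(A, b, alpha=1e3)    # heavy damping shrinks the solution
```

    The choice of alpha trades fidelity to the data against stability of the inversion, which is exactly the knob the improved algorithm above tunes with prior information about the sample.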

  3. Total-variation regularization with bound constraints

    SciTech Connect

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.

  4. Regularization ambiguities in loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Perez, Alejandro

    2006-02-01

    One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation, which is free of ultraviolet divergences. However, ambiguities associated to the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem—the existence of well-behaved regularization of the constraints—is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated to the SU(2) unitary representation used in the diffeomorphism covariant “point-splitting” regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions—due to the difficulties associated to the definition of the physical inner product—it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we

  5. A framework for regularized non-negative matrix factorization, with application to the analysis of gene expression data.

    PubMed

    Taslaman, Leo; Nilsson, Björn

    2012-01-01

    Non-negative matrix factorization (NMF) condenses high-dimensional data into lower-dimensional models subject to the requirement that data can only be added, never subtracted. However, the NMF problem does not have a unique solution, creating a need for additional constraints (regularization constraints) to promote informative solutions. Regularized NMF problems are more complicated than conventional NMF problems, creating a need for computational methods that incorporate the extra constraints in a reliable way. We developed novel methods for regularized NMF based on block-coordinate descent with proximal point modification and a fast optimization procedure over the alpha simplex. Our framework has important advantages in that it (a) accommodates a wide range of regularization terms, including sparsity-inducing terms like the L1 penalty, (b) guarantees that the solutions satisfy necessary conditions for optimality, ensuring that the results have well-defined numerical meaning, (c) allows the scale of the solution to be controlled exactly, and (d) is computationally efficient. We illustrate the use of our approach in the context of gene expression microarray data analysis. The improvements described remedy key limitations of previous proposals, strengthen the theoretical basis of regularized NMF, and facilitate the use of regularized NMF in applications.
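    The paper's block-coordinate scheme is more sophisticated, but the effect of an L1 (sparsity) term on the factor H can already be seen in classic multiplicative updates. The sketch below uses standard Lee-Seung-style update rules with a lam term added to the denominator; it is a stand-in for, not a reproduction of, the authors' algorithm.

```python
import numpy as np

def sparse_nmf(V, rank, lam, n_iters=500, seed=0):
    # Multiplicative updates for min ||V - W H||_F^2 + lam * sum(H),
    # keeping W and H non-negative throughout.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

rng = np.random.default_rng(10)
V = rng.random((20, 2)) @ rng.random((2, 15))   # exact non-negative rank-2 data
W, H = sparse_nmf(V, rank=2, lam=0.01)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    Increasing lam pushes entries of H toward zero, which is the sparsity-promoting behavior the framework above supports in a more principled, convergence-guaranteed way.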

  6. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely-used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
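    Setting the privacy machinery aside, the underlying model is plain L2-regularized logistic regression. A minimal centralized sketch for reference (the distributed, cryptographically protected variant is what the paper actually contributes):

```python
import numpy as np

def fit_logreg(X, y, lam, lr=0.5, n_iters=2000):
    # Full-batch gradient descent on the L2-regularized logistic loss.
    # Gradient of the mean cross-entropy is X^T (sigmoid(Xw) - y) / n.
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iters):
        prob = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (prob - y) / n + lam * w)
    return w

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 4))
w_true = np.array([2.0, -1.0, 0.0, 1.5])
y = (1.0 / (1.0 + np.exp(-(X @ w_true))) > rng.random(300)).astype(float)
w_hat = fit_logreg(X, y, lam=0.01)
acc = np.mean((X @ w_hat > 0) == (y == 1))
```

    The gradient is a sum over per-sample terms, which is precisely the structure that lets the computation be partitioned across institutions and protected as described above.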

  7. Accelerating Large Data Analysis By Exploiting Regularities

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.

  8. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely-used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  9. Nonlinear regularization techniques for seismic tomography

    SciTech Connect

    Loris, I. Douma, H.; Nolet, G.; Regone, C.

    2010-02-01

    The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear, l2 penalties are compared to so-called sparsity-promoting l1 and l0 penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an l2 norm works almost as well as minimizing the total variation but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer l1 damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple l2 minimization ('Tikhonov regularization'), which should be avoided. In some of our examples, the l0 method produced notable artifacts. In addition we show how nonlinear l1 methods for finding sparse models can be competitive in speed with the widely used l2 methods, certainly under noisy conditions, so that there is no need to shun l1 penalizations.
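    The three penalties compared above act on (wavelet) coefficients through very different shrinkage rules. A schematic of the corresponding thresholding maps, with multiplicative constants folded into lam:

```python
import numpy as np

def shrink(c, lam, penalty):
    # Shrinkage rules associated with the penalties compared above
    # (constants absorbed into lam).
    if penalty == "l2":   # Tikhonov-style: uniform scaling, nothing set to zero
        return c / (1.0 + lam)
    if penalty == "l1":   # soft thresholding: sparsity, small bias on survivors
        return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
    if penalty == "l0":   # hard thresholding: keep-or-kill
        return np.where(np.abs(c) > lam, c, 0.0)
    raise ValueError(penalty)

c = np.array([3.0, 0.4, -1.2, 0.05])
```

    The l2 rule never produces exact zeros, which is why l2-damped models stay noisy; the l1 rule zeroes small coefficients at the price of slightly biasing large ones; the l0 rule is unbiased on survivors but discontinuous, consistent with the artifacts reported above.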

  10. Efficient Regularized Regression with L0 Penalty for Variable Selection and Network Construction

    PubMed Central

    2016-01-01

    Variable selections for regression with high-dimensional big data have found many applications in bioinformatics and computational biology. One appealing approach is the L0 regularized regression which penalizes the number of nonzero features in the model directly. However, it is well known that L0 optimization is NP-hard and computationally challenging. In this paper, we propose efficient EM (L0EM) and dual L0EM (DL0EM) algorithms that directly approximate the L0 optimization problem. While L0EM is efficient with large sample size, DL0EM is efficient with high-dimensional (n ≪ m) data. They also provide a natural solution to all Lp problems with p ∈ [0,2], including the lasso with p = 1 and the elastic net with p ∈ [1,2]. The regularization parameter λ can be determined through cross-validation or AIC and BIC. We demonstrate our methods through simulation and high-dimensional genomic data. The results indicate that L0 has better performance than lasso, SCAD, and MC+, and L0 with AIC or BIC has similar performance as computationally intensive cross-validation. The proposed algorithms are efficient in identifying the nonzero variables with less bias and constructing biologically important networks with high-dimensional big data. PMID:27843486
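
    The L0EM and DL0EM algorithms of this record are not reproduced here. As a generic stand-in, the sketch below uses iterative hard thresholding, a standard surrogate for the NP-hard L0-penalized least squares that keeps only the k largest coefficients at each step; the data and dimensions are invented for illustration.

```python
import numpy as np

def iht(X, y, k, iters=200):
    """Iterative hard thresholding for sparse (L0-constrained) regression."""
    n, d = X.shape
    beta = np.zeros(d)
    t = 1.0 / np.linalg.norm(X, 2) ** 2          # step size from spectral norm
    for _ in range(iters):
        beta = beta + t * X.T @ (y - X @ beta)    # gradient step
        keep = np.argsort(np.abs(beta))[-k:]      # indices of k largest magnitudes
        mask = np.zeros(d, dtype=bool)
        mask[keep] = True
        beta[~mask] = 0.0                         # hard threshold the rest
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
beta_true = np.zeros(20)
beta_true[[3, 11]] = [2.0, -1.5]                  # 2-sparse ground truth
y = X @ beta_true                                 # noiseless responses
beta_hat = iht(X, y, k=2)
```

    Unlike the soft-thresholding used for the lasso, hard thresholding introduces no shrinkage bias on the retained coefficients, which is the "less bias" property the record claims for L0-type penalties.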

  11. Hawking fluxes and anomalies in rotating regular black holes with a time-delay

    NASA Astrophysics Data System (ADS)

    Takeuchi, Shingo

    2016-11-01

    Based on the anomaly cancellation method, we compute the Hawking fluxes (the Hawking thermal flux and the total flux of the energy-momentum tensor) from a four-dimensional rotating regular black hole with a time-delay. To this purpose, for the three metrics proposed in [1], we attempt the dimensional reduction in which the anomaly cancellation method is feasible in the near-horizon region of a general scalar field theory. We demonstrate that the dimensional reduction is possible for two of those metrics, and accordingly carry out the anomaly cancellation method and compute the Hawking fluxes for them. Our Hawking fluxes involve three effects: (1) the quantum gravity effect regularizing the core of the black hole, (2) the rotation of the black hole, and (3) the time-delay. For the metric in which the dimensional reduction could not be performed, we argue that the metric itself may be problematic and discuss the cause. The Hawking fluxes computed in this study can be considered to correspond to more realistic Hawking fluxes. Which Hawking fluxes can be obtained from the anomaly cancellation method is also interesting in terms of the relation between the consistency of quantum field theories and black hole thermodynamics.

  12. Laplacian embedded regression for scalable manifold regularization.

    PubMed

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real-world data sets demonstrate the effectiveness and efficiency of the proposed methods.
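
    The manifold regularization framework this record builds on can be sketched with the classical LapRLS closed form (Belkin, Niyogi, and Sindhwani), not the paper's LapESVR/LapERLS variants: a kernel least-squares fit on the labeled points plus a graph-Laplacian smoothness penalty over labeled and unlabeled points. The RBF kernel, toy clusters, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lap_rls(X, y_labeled, n_labeled, lam=1e-2, mu=1e-1, gamma=1.0):
    """Laplacian-regularized least squares, closed form:
    solve (J K + lam*I + mu*L K) alpha = J y, with J masking unlabeled rows."""
    n = X.shape[0]
    K = rbf(X, X, gamma)                      # kernel over labeled + unlabeled
    W = rbf(X, X, gamma)                      # graph weights (same RBF here)
    L = np.diag(W.sum(axis=1)) - W            # combinatorial graph Laplacian
    J = np.zeros((n, n))
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    y = np.zeros(n)
    y[:n_labeled] = y_labeled
    alpha = np.linalg.solve(J @ K + lam * np.eye(n) + mu * L @ K, J @ y)
    return K @ alpha                           # f evaluated at all points

# two well-separated clusters, one labeled point in each (labeled rows first)
X = np.array([[0.0], [5.0], [0.1], [0.2], [5.1], [5.2]])
f = lap_rls(X, np.array([1.0, -1.0]), n_labeled=2)
```

    The Laplacian term forces f to vary smoothly along the data manifold, so the unlabeled points inherit the sign of the labeled point in their cluster.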

  13. Quantum search algorithms on a regular lattice

    SciTech Connect

    Hein, Birgit; Tanner, Gregor

    2010-07-15

    Quantum algorithms for searching for one or more marked items on a d-dimensional lattice provide an extension of Grover's search algorithm including a spatial component. We demonstrate that these lattice search algorithms can be viewed in terms of the level dynamics near an avoided crossing of a one-parameter family of quantum random walks. We give approximations for both the level splitting at the avoided crossing and the effectively two-dimensional subspace of the full Hilbert space spanning the level crossing. This makes it possible to give the leading order behavior for the search time and the localization probability in the limit of large lattice size including the leading order coefficients. For d=2 and d=3, these coefficients are calculated explicitly. Closed form expressions are given for higher dimensions.
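
    The lattice search of this record extends Grover's algorithm with a spatial component; the spatial walk itself is not sketched here. The minimal sketch below simulates standard (structureless) Grover amplitude amplification, whose ~(π/4)√N iteration count is the baseline scaling the lattice algorithms generalize.

```python
import numpy as np

def grover_success_prob(N, marked, iters):
    """Success probability after k Grover iterations
    (oracle phase flip, then inversion about the mean)."""
    amp = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition
    for _ in range(iters):
        amp[marked] *= -1.0                    # oracle: flip marked amplitude
        amp = 2.0 * amp.mean() - amp           # diffusion: inversion about mean
    return amp[marked] ** 2

N = 64
theta = np.arcsin(1.0 / np.sqrt(N))
k_opt = int(round(np.pi / (4.0 * theta) - 0.5))   # ~ (pi/4) * sqrt(N)
p = grover_success_prob(N, marked=0, iters=k_opt)
```

    After the optimal number of iterations the marked amplitude is sin((2k+1)θ), so the success probability is close to 1; running further would rotate past the target and reduce it.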

  14. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    ERIC Educational Resources Information Center

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  15. The Essential Special Education Guide for the Regular Education Teacher

    ERIC Educational Resources Information Center

    Burns, Edward

    2007-01-01

    The Individuals with Disabilities Education Act (IDEA) of 2004 has placed a renewed emphasis on the importance of the regular classroom, the regular classroom teacher and the general curriculum as the primary focus of special education. This book contains over 100 topics that deal with real issues and concerns regarding the regular classroom and…

  16. Revealing hidden regularities with a general approach to fission

    NASA Astrophysics Data System (ADS)

    Schmidt, Karl-Heinz; Jurado, Beatriz

    2015-12-01

    Selected aspects of a general approach to nuclear fission are described with the focus on the possible benefit of meeting the increasing need of nuclear data for the existing and future emerging nuclear applications. The most prominent features of this approach are the evolution of quantum-mechanical wave functions in systems with complex shape, memory effects in the dynamics of stochastic processes, the influence of the Second Law of thermodynamics on the evolution of open systems in terms of statistical mechanics, and the topological properties of a continuous function in multi-dimensional space. It is demonstrated that this approach allows reproducing the measured fission barriers and the observed properties of the fission fragments and prompt neutrons. Our approach is based on sound physical concepts, as demonstrated by the fact that practically all the parameters have a physical meaning, and reveals a high degree of regularity in the fission observables. Therefore, we expect a good predictive power within the region extending from Po isotopes to Sg isotopes where the model parameters have been adjusted. Our approach can be extended to other regions provided that there is enough empirical information available that allows determining appropriate values of the model parameters. Possibilities for combining this general approach with microscopic models are suggested. These are supposed to enhance the predictive power of the general approach and to help improve or adjust the microscopic models. This could be a way to overcome the present difficulties in producing evaluations with the required accuracy.

  17. Anderson localization and ergodicity on random regular graphs

    NASA Astrophysics Data System (ADS)

    Tikhonov, K. S.; Mirlin, A. D.; Skvortsov, M. A.

    2016-12-01

    A numerical study of Anderson transition on random regular graphs (RRGs) with diagonal disorder is performed. The problem can be described as a tight-binding model on a lattice with N sites that is locally a tree with constant connectivity. In a certain sense, the RRG ensemble can be seen as an infinite-dimensional (d →∞ ) cousin of the Anderson model in d dimensions. We focus on the delocalized side of the transition and stress the importance of finite-size effects. We show that the data can be interpreted in terms of the finite-size crossover from a small (N ≪Nc ) to a large (N ≫Nc ) system, where Nc is the correlation volume diverging exponentially at the transition. A distinct feature of this crossover is a nonmonotonicity of the spectral and wave-function statistics, which is related to properties of the critical phase in the studied model and renders the finite-size analysis highly nontrivial. Our results support an analytical prediction that states in the delocalized phase (and at N ≫Nc ) are ergodic in the sense that their inverse participation ratio scales as 1 /N .
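
    The ergodicity diagnostic of this record is the inverse participation ratio (IPR), Σᵢ|ψᵢ|⁴, which scales as 1/N for ergodic states and stays O(1) for localized ones. The sketch below computes it for a 1D tight-binding ring with diagonal disorder — a minimal stand-in, not the random regular graphs of the paper, and the disorder strengths are illustrative assumptions.

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of a normalized wave function."""
    p = np.abs(psi) ** 2
    return np.sum(p ** 2)

def ring_hamiltonian(N, disorder, rng):
    """1D tight-binding ring with on-site (diagonal) disorder."""
    H = np.diag(disorder * rng.uniform(-0.5, 0.5, N))
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = 1.0   # hopping terms
    return H

rng = np.random.default_rng(0)
N = 200
# weak disorder: states spread over many sites, mean IPR is small
_, vecs_weak = np.linalg.eigh(ring_hamiltonian(N, 0.5, rng))
# strong disorder: states localize on few sites, mean IPR is O(1)
_, vecs_strong = np.linalg.eigh(ring_hamiltonian(N, 20.0, rng))
ipr_weak = np.mean([ipr(vecs_weak[:, i]) for i in range(N)])
ipr_strong = np.mean([ipr(vecs_strong[:, i]) for i in range(N)])
```

    The limiting cases anchor the scale: a perfectly uniform state has IPR = 1/N, a state concentrated on one site has IPR = 1.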

  18. Charge regularization in phase separating polyelectrolyte solutions.

    PubMed

    Muthukumar, M; Hua, Jing; Kundagrami, Arindam

    2010-02-28

    Theoretical investigations of phase separation in polyelectrolyte solutions have so far assumed that the effective charge of the polyelectrolyte chains is fixed. The ability of the polyelectrolyte chains to self-regulate their effective charge due to the self-consistent coupling between ionization equilibrium and polymer conformations, depending on the dielectric constant, temperature, and polymer concentration, affects the critical phenomena and phase transitions drastically. By considering salt-free polyelectrolyte solutions, we show that the daughter phases have different polymer charges from that of the mother phase. The critical point is also altered significantly by the charge self-regularization of the polymer chains. This work extends the progress made so far in the theory of phase separation of strong polyelectrolyte solutions to a higher level of understanding by considering chains which can self-regulate their charge.

  19. Regularization of Nutation Time Series at GSFC

    NASA Astrophysics Data System (ADS)

    Le Bail, K.; Gipson, J. M.; Bolotin, S.

    2012-12-01

    VLBI is unique in its ability to measure all five Earth orientation parameters. In this paper we focus on the two nutation parameters, which characterize the orientation of the Earth's rotation axis in space. We look at the periodicities and the spectral characteristics of these parameters for both R1 and R4 sessions independently. The most significant periodic signal with a period shorter than 600 days (a period of 450 days) is common to these four time series, and the type of noise determined by the Allan variance is white noise for all four series. To investigate methods of regularizing the series, we look at a Singular Spectrum Analysis-derived method and at the Kalman filter. The two methods adequately reproduce the tendency of the nutation time series, but the resulting series are noisier using the Singular Spectrum Analysis-derived method.

  20. Thermodynamics of regular accelerating black holes

    NASA Astrophysics Data System (ADS)

    Astorino, Marco

    2017-03-01

    Using the covariant phase space formalism, we compute the conserved charges for a solution describing an accelerating and electrically charged Reissner-Nordström black hole. The metric is regular provided that the acceleration is driven by an external electric field, in place of the usual string of the standard C-metric. The Smarr formula and the first law of black hole thermodynamics are fulfilled. The resulting mass has the same form as the Christodoulou-Ruffini irreducible mass. On the basis of these results, we can extrapolate the mass and thermodynamics of the rotating C-metric, which describes a Kerr-Newman-(A)dS black hole accelerated by a pulling string.

  1. Regularity of inviscid shell models of turbulence

    NASA Astrophysics Data System (ADS)

    Constantin, Peter; Levant, Boris; Titi, Edriss S.

    2007-01-01

    In this paper we continue the analytical study of the sabra shell model of the energy turbulent cascade. We prove the global existence of weak solutions of the inviscid sabra shell model, and show that these solutions are unique for some short interval of time. In addition, we prove that the solutions conserve energy, provided that the components of the solution satisfy |u_n| ≤ C k_n^(-1/3) [n log(n+1)]^(-1) for some positive absolute constant C, which is the analog of Onsager's conjecture for the Euler equations. Moreover, we give a Beale-Kato-Majda type criterion for the blow-up of solutions of the inviscid sabra shell model and show the global regularity of the solutions in the "two-dimensional" parameters regime.

  2. Regularity of free boundaries a heuristic retro

    PubMed Central

    Caffarelli, Luis A.; Shahgholian, Henrik

    2015-01-01

    This survey concerns regularity theory of a few free boundary problems that have been developed in the past half a century. Our intention is to bring up different ideas and techniques that constitute the fundamentals of the theory. We shall discuss four different problems, where approaches are somewhat different in each case. Nevertheless, these problems can be divided into two groups: (i) obstacle and thin obstacle problem; (ii) minimal surfaces, and cavitation flow of a perfect fluid. In each case, we shall only discuss the methodology and approaches, giving basic ideas and tools that have been specifically designed and tailored for that particular problem. The survey is kept at a heuristic level with mainly geometric interpretation of the techniques and situations in hand. PMID:26261372

  3. Prereferral Intervention Practices of Regular Classroom Teachers: Implications for Regular and Special Education Preparation.

    ERIC Educational Resources Information Center

    Brown, Joyceanne; And Others

    1991-01-01

    This survey of 201 regular education teachers found that the most frequently used prereferral strategies used to facilitate classroom adjustment and achievement were consultation with other professionals, parent conferences, and behavior management techniques. Elementary teachers implemented more strategies than secondary-level teachers.…

  4. Black hole mimickers: Regular versus singular behavior

    SciTech Connect

    Lemos, Jose P. S.; Zaslavskii, Oleg B.

    2008-07-15

    Black hole mimickers are possible alternatives to black holes; they would look observationally almost like black holes but would have no horizon. The properties in the near-horizon region where gravity is strong can be quite different for both types of objects, but at infinity it could be difficult to discern black holes from their mimickers. To disentangle this possible confusion, we examine the near-horizon properties, and their connection with far away asymptotic properties, of some candidates to black hole mimickers. We study spherically symmetric uncharged or charged but nonextremal objects, as well as spherically symmetric charged extremal objects. Within the uncharged or charged but nonextremal black hole mimickers, we study nonextremal ε-wormholes on the threshold of the formation of an event horizon, of which a subclass are called black foils, and gravastars. Within the charged extremal black hole mimickers we study extremal ε-wormholes on the threshold of the formation of an event horizon, quasi-black holes, and wormholes on the basis of quasi-black holes from Bonnor stars. We elucidate whether or not the objects belonging to these two classes remain regular in the near-horizon limit. The requirement of full regularity, i.e., finite curvature and absence of naked behavior, up to an arbitrary neighborhood of the gravitational radius of the object enables one to rule out potential mimickers in most of the cases. A list ranking the black hole mimickers from best to worst, both nonextremal and extremal, is as follows: wormholes on the basis of extremal black holes or on the basis of quasi-black holes, quasi-black holes, wormholes on the basis of nonextremal black holes (black foils), and gravastars. Since in observational astrophysics it is difficult to find extremal configurations (the best mimickers in the ranking), whereas nonextremal configurations are really bad mimickers, the task of distinguishing black holes from their mimickers seems to

  5. Regularization for Atmospheric Temperature Retrieval Problems

    NASA Technical Reports Server (NTRS)

    Velez-Reyes, Miguel; Galarza-Galarza, Ruben

    1997-01-01

    Passive remote sensing of the atmosphere is used to determine the atmospheric state. A radiometer measures microwave emissions from earth's atmosphere and surface. The radiance measured by the radiometer is proportional to the brightness temperature. This brightness temperature can be used to estimate atmospheric parameters such as temperature and water vapor content. These quantities are of primary importance for different applications in meteorology, oceanography, and geophysical sciences. Depending on the range in the electromagnetic spectrum being measured by the radiometer and the atmospheric quantities to be estimated, the retrieval or inverse problem of determining atmospheric parameters from brightness temperature might be linear or nonlinear. In most applications, the retrieval problem requires the inversion of a Fredholm integral equation of the first kind making this an ill-posed problem. The numerical solution of the retrieval problem requires the transformation of the continuous problem into a discrete problem. The ill-posedness of the continuous problem translates into ill-conditioning or ill-posedness of the discrete problem. Regularization methods are used to convert the ill-posed problem into a well-posed one. In this paper, we present some results of our work in applying different regularization techniques to atmospheric temperature retrievals using brightness temperatures measured with the SSM/T-1 sensor. Simulation results are presented which show the potential of these techniques to improve temperature retrievals. In particular, no statistical assumptions are needed and the algorithms were capable of correctly estimating the temperature profile corner at the tropopause independent of the initial guess.
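
    The ill-posedness this record describes can be demonstrated on a toy discretized Fredholm equation of the first kind. This is a generic sketch, not the SSM/T-1 retrieval: a Gaussian smoothing kernel plays the role of the radiative-transfer kernel, a sine profile stands in for the temperature profile, and Tikhonov regularization converts the ill-conditioned inversion into a stable one.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized solution argmin ||Ax - b||^2 + lam * ||x||^2."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# discretized Gaussian smoothing kernel: a severely ill-conditioned matrix,
# as arises from a Fredholm integral equation of the first kind
n = 40
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01) / n
x_true = np.sin(2.0 * np.pi * s)                 # smooth "profile" to retrieve
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.normal(size=n)       # noisy "measurements"
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # noise amplified by tiny singular values
x_reg = tikhonov(A, b, lam=1e-6)                 # regularized retrieval
```

    Even at a 1e-4 noise level, the unregularized solution is dominated by amplified noise, while the Tikhonov solution stays close to the true profile — the well-posed reformulation the record refers to.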

  6. Discriminative Elastic-Net Regularized Linear Regression.

    PubMed

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminate representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods can be available at http://www.yongxu.org/lunwen.html.
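
    The elastic-net penalty at the core of this record combines l1 and l2 terms. The sketch below is a generic coordinate-descent solver for the plain elastic net — it does not implement the paper's ENLR with relaxed targets and singular-value regularization — and the one-feature data set is invented for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net(X, y, lam=0.1, alpha=0.5, iters=200):
    """Coordinate descent for
    (1/2n)||y - X b||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n, d = X.shape
    b = np.zeros(d)
    z = (X ** 2).sum(axis=0) / n                  # per-feature curvature
    for _ in range(iters):
        for j in range(d):
            r = y - X @ b + X[:, j] * b[j]        # partial residual w/o feature j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam * alpha) / (z[j] + lam * (1 - alpha))
    return b

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])                     # exactly y = 2x
b = elastic_net(X, y, lam=1e-3)
```

    Each coordinate update is a one-dimensional soft-thresholding step; with a tiny penalty the fit recovers the ordinary least-squares slope almost exactly.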

  7. Regularization of Instantaneous Frequency Attribute Computations

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computation of a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey, "Local seismic attributes," Geophysics 72.3 (2007): A29-A33; Cohen, Leon, Time-Frequency Analysis: Theory and Applications, Prentice Hall (1995); Farquharson, Colin G., and Douglas W. Oldenburg, "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems," Geophysical Journal International 156.3 (2004): 411-425; Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff, "Complex seismic trace analysis," Geophysics 44.6 (1979): 1041-1063.
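
    The first method in this record — instantaneous frequency as the phase derivative of the analytic signal — can be sketched in a few lines. This is a minimal illustration, not the authors' roughness-penalized scheme: the analytic signal is built by FFT (zeroing negative frequencies), and the division is guarded by a small ε in the spirit of the stabilized division the record mentions.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero negative frequencies, double positive."""
    n = len(x)                   # n assumed even here
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs, eps=1e-8):
    """d(phase)/dt / (2*pi), with an eps-regularized amplitude weight."""
    z = analytic_signal(x)
    phase = np.unwrap(np.angle(z))
    amp2 = np.abs(z) ** 2
    # amplitude weighting amp2/(amp2+eps) stabilizes division where amp -> 0
    return np.gradient(phase) * fs / (2.0 * np.pi) * amp2 / (amp2 + eps)

fs = 100.0
t = np.arange(0.0, 2.0, 1.0 / fs)        # 200 samples, integer number of cycles
x = np.cos(2.0 * np.pi * 5.0 * t)        # clean 5 Hz tone
f_inst = instantaneous_frequency(x, fs)
```

    For a clean tone the estimate sits on the true frequency; on noisy traces the raw phase derivative becomes erratic wherever the envelope is weak, which is exactly where the regularized division (or the roughness penalty of the record) matters.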

  8. The connection between regularization operators and support vector kernels.

    PubMed

    Smola, Alex J.; Schölkopf, Bernhard; Müller, Klaus-Robert

    1998-06-01

    In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's Functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels in the view of regularization theory and corresponding operators associated with the classes of both polynomial kernels and translation invariant kernels. The latter are also analyzed on periodical domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.

  9. Tunneling into quantum wires: Regularization of the tunneling Hamiltonian and consistency between free and bosonized fermions

    NASA Astrophysics Data System (ADS)

    Filippone, Michele; Brouwer, Piet W.

    2016-12-01

    Tunneling between a point contact and a one-dimensional wire is usually described with the help of a tunneling Hamiltonian that contains a δ function in position space. Whereas the leading-order contribution to the tunneling current is independent of the way this δ function is regularized, higher-order corrections with respect to the tunneling amplitude are known to depend on the regularization. Instead of regularizing the δ function in the tunneling Hamiltonian, one may also obtain a finite tunneling current by invoking the ultraviolet cutoffs in a field-theoretic description of the electrons in the one-dimensional conductor, a procedure that is often used in the literature. For the latter case, we show that standard ultraviolet cutoffs lead to different results for the tunneling current in fermionic and bosonized formulations of the theory, when going beyond leading order in the tunneling amplitude. We show how to recover the standard fermionic result using the formalism of functional bosonization and revisit the tunneling current to leading order in the interacting case.

  10. Error analysis for matrix elastic-net regularization algorithms.

    PubMed

    Li, Hong; Chen, Na; Li, Luoqing

    2012-05-01

    Elastic-net regularization is a successful approach in statistical modeling. It can avoid large variations which occur in estimating complex models. In this paper, elastic-net regularization is extended to a more general setting, the matrix recovery (matrix completion) setting. Based on a combination of the nuclear-norm minimization and the Frobenius-norm minimization, we consider the matrix elastic-net (MEN) regularization algorithm, which is an analog to the elastic-net regularization scheme from compressive sensing. Some properties of the estimator are characterized by the singular value shrinkage operator. We estimate the error bounds of the MEN regularization algorithm in the framework of statistical learning theory. We compute the learning rate by estimates of the Hilbert-Schmidt operators. In addition, an adaptive scheme for selecting the regularization parameter is presented. Numerical experiments demonstrate the superiority of the MEN regularization algorithm.
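
    The singular value shrinkage operator that characterizes the MEN estimator in this record has a simple closed form: it is the proximal operator of τ·(nuclear norm) + (β/2)·(Frobenius norm squared), which soft-thresholds the singular values and then scales them by 1/(1+β). The sketch below is a generic implementation of that operator, not the full MEN algorithm.

```python
import numpy as np

def matrix_elastic_net_prox(M, tau, beta):
    """Prox of tau*||X||_* + (beta/2)*||X||_F^2 at M:
    soft-threshold the singular values, then shrink by 1/(1+beta)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0) / (1.0 + beta)
    return U @ np.diag(s_shrunk) @ Vt

M = np.diag([3.0, 1.0, 0.5])
out = matrix_elastic_net_prox(M, tau=1.0, beta=0.0)   # beta=0 reduces to SVT
```

    With β = 0 this reduces to the singular value thresholding used for nuclear-norm-only matrix completion; the Frobenius term adds a ridge-like uniform shrinkage, mirroring how the vector elastic net combines lasso and ridge.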

  11. Preparation of Regular Specimens for Atom Probes

    NASA Technical Reports Server (NTRS)

    Kuhlman, Kim; Wishard, James

    2003-01-01

    A method of preparation of specimens of non-electropolishable materials for analysis by atom probes is being developed as a superior alternative to a prior method. In comparison with the prior method, the present method involves less processing time. Also, whereas the prior method yields irregularly shaped and sized specimens, the present developmental method offers the potential to prepare specimens of regular shape and size. The prior method is called the method of sharp shards because it involves crushing the material of interest and selecting microscopic sharp shards of the material for use as specimens. Each selected shard is oriented with its sharp tip facing away from the tip of a stainless-steel pin and is glued to the tip of the pin by use of silver epoxy. Then the shard is milled by use of a focused ion beam (FIB) to make the shard very thin (relative to its length) and to make its tip sharp enough for atom-probe analysis. The method of sharp shards is extremely time-consuming because the selection of shards must be performed with the help of a microscope, the shards must be positioned on the pins by use of micromanipulators, and the irregularity of size and shape necessitates many hours of FIB milling to sharpen each shard. In the present method, a flat slab of the material of interest (e.g., a polished sample of rock or a coated semiconductor wafer) is mounted in the sample holder of a dicing saw of the type conventionally used to cut individual integrated circuits out of the wafers on which they are fabricated in batches. A saw blade appropriate to the material of interest is selected. The depth of cut and the distance between successive parallel cuts is made such that what is left after the cuts is a series of thin, parallel ridges on a solid base. Then the workpiece is rotated 90° and the pattern of cuts is repeated, leaving behind a square array of square posts on the solid base. The posts can be made regular, long, and thin, as required for samples

  12. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  13. Full Regularity for a C*-ALGEBRA of the Canonical Commutation Relations

    NASA Astrophysics Data System (ADS)

    Grundling, Hendrik; Neeb, Karl-Hermann

    The Weyl algebra, the usual C*-algebra employed to model the canonical commutation relations (CCRs), has a well-known defect, in that it has a large number of representations which are not regular and these cannot model physical fields. Here, we construct explicitly a C*-algebra which can reproduce the CCRs of a countably dimensional symplectic space (S, B) and such that its representation set is exactly the full set of regular representations of the CCRs. This construction uses Blackadar's version of infinite tensor products of nonunital C*-algebras, and it produces a "host algebra" (i.e. a generalized group algebra, explained below) for the σ-representation theory of the Abelian group S, where σ(·,·) ≔ e^{iB(·,·)/2}. As an easy application, it then follows that for every regular representation of \overline{Δ(S, B)} on a separable Hilbert space, there is a direct integral decomposition of it into irreducible regular representations (a known result).

  14. Compression and regularization with the information bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, Dj; Schwab, David

    Compression fundamentally involves a decision about what is relevant and what is not. The information bottleneck (IB) by Tishby, Pereira, and Bialek formalized this notion as an information-theoretic optimization problem and proposed an optimal tradeoff between throwing away as many bits as possible, and selectively keeping those that are most important. The IB has also recently been proposed as a theory of sensory gating and predictive computation in the retina by Palmer et al. Here, we introduce an alternative formulation of the IB, the deterministic information bottleneck (DIB), that we argue better captures the notion of compression, including that done by the brain. As suggested by its name, the solution to the DIB problem is a deterministic encoder, as opposed to the stochastic encoder that is optimal under the IB. We then compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB vastly outperforms the IB in terms of the DIB cost function. Our derivation of the DIB also provides a family of models which interpolates between the DIB and IB by adding noise of a particular form. We discuss the role of this noise as a regularizer.

  15. Determinants of Scanpath Regularity in Reading.

    PubMed

    von der Malsburg, Titus; Kliegl, Reinhold; Vasishth, Shravan

    2015-09-01

    Scanpaths have played an important role in classic research on reading behavior. Nevertheless, they have largely been neglected in later research, perhaps due to a lack of suitable analytical tools. Recently, von der Malsburg and Vasishth (2011) proposed a new measure for quantifying differences between scanpaths and demonstrated that this measure can recover effects that were missed with the traditional eye-tracking measures. However, the sentences used in that study were difficult to process, and the scanpath effects were accordingly strong. The purpose of the present study was to test the validity, sensitivity, and scope of applicability of the scanpath measure, using simple sentences that are typically read from left to right. We derived predictions for the regularity of scanpaths from the literature on oculomotor control, sentence processing, and cognitive aging and tested these predictions using the scanpath measure and a large database of eye movements. All predictions were confirmed: Sentences with short words and syntactically more difficult sentences elicited more irregular scanpaths. Also, older readers produced more irregular scanpaths than younger readers. In addition, we found an effect that was not reported earlier: Syntax had a smaller influence on the eye movements of older readers than on those of young readers. We discuss this interaction of syntactic parsing cost with age in terms of shifts in processing strategies and a decline of executive control as readers age. Overall, our results demonstrate the validity and sensitivity of the scanpath measure and thus establish it as a productive and versatile tool for reading research.

  16. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples for alleviating the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  17. Information theoretic regularization in diffuse optical tomography.

    PubMed

    Panagiotou, Christos; Somayajula, Sangeetha; Gibson, Adam P; Schweiger, Martin; Leahy, Richard M; Arridge, Simon R

    2009-05-01

    Diffuse optical tomography (DOT) retrieves the spatially distributed optical characteristics of a medium from external measurements. Recovering the parameters of interest involves solving a nonlinear and highly ill-posed inverse problem. This paper examines the possibility of regularizing DOT via the introduction of a priori information from alternative high-resolution anatomical modalities, using the information theory concepts of mutual information (MI) and joint entropy (JE). Such functionals evaluate the similarity between the reconstructed optical image and the prior image while bypassing the multimodality barrier manifested as the incommensurate relation between the gray value representations of corresponding anatomical features in the two modalities. By introducing structural information, we aim to improve the spatial resolution and quantitative accuracy of the solution. We provide a thorough explanation of the theory from an imaging perspective, accompanied by preliminary results using numerical simulations. In addition we compare the performance of MI and JE. Finally, we have adopted a method for fast marginal entropy evaluation and optimization by modifying the objective function and extending it to the JE case. We demonstrate its use on an image reconstruction framework and show significant computational savings.
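Both similarity functionals reduce to entropy computations on a joint gray-value histogram. A minimal sketch, with invented toy "images" already quantized to a few gray levels (not the paper's DOT reconstruction code):

```python
import math
from collections import Counter

def entropy(counts, n):
    """Shannon entropy (bits) from histogram counts over n samples."""
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

# two "images" already binned to a few gray levels (toy data)
img_a = [0, 0, 1, 1, 2, 2, 3, 3]
img_b = [0, 1, 1, 1, 2, 2, 3, 0]
n = len(img_a)

h_a = Counter(img_a)
h_b = Counter(img_b)
h_ab = Counter(zip(img_a, img_b))      # joint gray-value histogram

H_A = entropy(h_a.values(), n)
H_B = entropy(h_b.values(), n)
JE = entropy(h_ab.values(), n)         # joint entropy H(A, B)
MI = H_A + H_B - JE                    # mutual information I(A; B)
```

Both measures depend only on the co-occurrence of gray values, which is why they bypass the incommensurate gray-value scales of the two modalities.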

  18. Flip to Regular Triangulation and Convex Hull.

    PubMed

    Gao, Mingcen; Cao, Thanh-Tung; Tan, Tiow-Seng

    2017-02-01

    Flip is a simple and local operation that transforms one triangulation into another. It makes changes only to some neighboring simplices, without considering any attribute or configuration global in nature to the triangulation. Thanks to this characteristic, several flips can be applied independently to different small, non-overlapping regions of one triangulation. Such operations are favored when designing algorithms for data-parallel, massively multithreaded hardware, such as the GPU. However, most existing flip algorithms are designed to be executed sequentially, and usually need some restrictions on the execution order of flips, making them hard to adapt to parallel computation. In this paper, we present an in-depth study of flip algorithms in low dimensions, with the emphasis on the flexibility of their execution order. In particular, we propose a series of provably correct flip algorithms for regular triangulation and convex hull in 2D and 3D, with implementations for both CPUs and GPUs. Our experiments show that our GPU implementation for constructing these structures from a given point set achieves up to two orders of magnitude of speedup over popular single-threaded CPU implementations of existing algorithms.
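In the unweighted (Delaunay) special case of a regular triangulation, the local flip test reduces to the classic incircle predicate: an edge shared by triangles abc and abd is flipped when d lies inside the circumcircle of abc. A plain-Python sketch (the points are illustrative, not from the paper):

```python
def orient2d(a, b, c):
    """Twice the signed area of triangle abc (> 0 if counterclockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_circumcircle(a, b, c, d):
    """True if d is strictly inside the circumcircle of CCW triangle abc."""
    assert orient2d(a, b, c) > 0, "abc must be counterclockwise"
    rows = [(p[0] - d[0], p[1] - d[1]) for p in (a, b, c)]
    m = [(x, y, x * x + y * y) for x, y in rows]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

a, b, c = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
print(in_circumcircle(a, b, c, (0.5, 0.5)))  # inside  -> True
print(in_circumcircle(a, b, c, (2.0, 2.0)))  # outside -> False
```

Robust implementations evaluate this determinant with exact or adaptive-precision arithmetic, since its sign drives every flip decision.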

  19. Temporal Regularity of the Environment Drives Time Perception

    PubMed Central

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perfectly regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stimulus was regular or not. We found that participants exposed to an irregular environment frequently reported perfectly regularly paced stimuli to be irregular. In a second experiment, we asked participants to judge whether the final stimulus was presented before or after a flash. In this way, we were able to determine distortions in temporal perception as changes in the timing necessary for the sound and the flash to be perceived as synchronous. We found that within a regular context, the perceived timing of deviant last stimuli changed so that the relative anisochrony appeared to be perceptually decreased. In the irregular context, the perceived timing of irregular stimuli following a regular sequence was not affected. These observations suggest that humans use temporal expectations to evaluate the regularity of sequences and that expectations are combined with sensory stimuli to adapt perceived timing to follow the statistics of the environment. Expectations can be seen as a priori probabilities on which the perceived timing of stimuli depends. PMID:27441686

  20. Pauli-Villars regularization of field theories on the light front

    SciTech Connect

    Hiller, John R.

    2010-12-22

    Four-dimensional quantum field theories generally require regularization to be well defined. This can be done in various ways, but here we focus on Pauli-Villars (PV) regularization and apply it to nonperturbative calculations of bound states. The philosophy is to introduce enough PV fields to the Lagrangian to regulate the theory perturbatively, including preservation of symmetries, and assume that this is sufficient for the nonperturbative case. The numerical methods usually necessary for nonperturbative bound-state problems are then applied to a finite theory that has the original symmetries. The bound-state problem is formulated as a mass eigenvalue problem in terms of the light-front Hamiltonian. Applications to quantum electrodynamics are discussed.
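The mechanism can be recalled in one line: each PV partner subtracts the high-momentum behavior of a physical propagator. Schematically, for a scalar of mass m and a PV regulator of mass Λ (a standard textbook form, not an equation quoted from this work),

```latex
\frac{1}{k^2 - m^2 + i\varepsilon}
  \;\longrightarrow\;
\frac{1}{k^2 - m^2 + i\varepsilon} - \frac{1}{k^2 - \Lambda^2 + i\varepsilon}
  = \frac{m^2 - \Lambda^2}{(k^2 - m^2 + i\varepsilon)(k^2 - \Lambda^2 + i\varepsilon)},
```

so the regulated propagator falls off as 1/k⁴ rather than 1/k², taming ultraviolet divergences; physical results are recovered in the limit Λ → ∞.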

  1. m-mode regularization scheme for the self-force in Kerr spacetime

    SciTech Connect

    Barack, Leor; Golbourn, Darren A.; Sago, Norichika

    2007-12-15

    We present a new, simple method for calculating the scalar, electromagnetic, and gravitational self-forces acting on particles in orbit around a Kerr black hole. The standard "mode-sum regularization" approach for self-force calculations relies on a decomposition of the full (retarded) perturbation field into multipole modes, followed by the application of a certain mode-by-mode regularization procedure. In recent years several groups have developed numerical codes for calculating black hole perturbations directly in 2+1 dimensions (i.e., decomposing the azimuthal dependence into m-modes, but refraining from a full multipole decomposition). Here we formulate a practical scheme for constructing the self-force directly from the 2+1-dimensional m-modes. While the standard mode-sum method has served well in calculations of the self-force in Schwarzschild geometry, the new scheme should allow a more efficient treatment of the Kerr problem.
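For orientation, the standard mode-sum scheme that the new m-mode method parallels can be written schematically (notation as is customary in this literature; the regularization parameters A_α, B_α, C_α are known analytically for a given orbit):

```latex
F_\alpha^{\mathrm{self}}
  = \sum_{l=0}^{\infty}
    \left[ F_{\alpha}^{(\mathrm{ret})\,l} - A_\alpha L - B_\alpha - \frac{C_\alpha}{L} \right],
\qquad L = l + \tfrac{1}{2},
```

where each regularized l-mode term falls off fast enough for the sum over multipoles to converge.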

  2. Global regularity for certain dissipative hydrodynamical and geophysical systems with an application in control theory

    NASA Astrophysics Data System (ADS)

    Cao, Chongsheng

    In this dissertation, we deal with different properties of the solutions for several dissipative evolution systems. In one we study the regularity, namely, a Gevrey class regularity of the solution for the nonlinear analytic parabolic equations and Navier-Stokes equations on the two dimensional sphere. We prove the instantaneous Gevrey regularity for these systems. In addition, we provide an estimate for the number of determining modes and nodes for the two dimensional turbulent flows on the sphere. Next, we study the existence and uniqueness of the Lake equations, a special shallow water model of a fluid flow in a shallow basin with varying bottom topography. We show that the global existence of weak solutions for these equations with certain degenerate varying bottom topography, i.e., in the presence of beaches. Later we show the uniqueness for the case of non-degenerate but non-regular topography. Finally, we consider a feedback control problem for the Navier-Stokes equations. Namely, we show that in case one is able to design a linear feedback control that stabilizes a stationary solution to the Galerkin approximating scheme of the Navier-Stokes equations then the same feedback controller is, in fact, stabilizing a near by exact steady state of the closed-loop Navier-Stokes equations. It is worth to stressing that all the conditions of this statement are checkable on the computed Galerkin approximating solution. The same result is also true in the context of nonlinear Galerkin methods, which based on the theory of Approximate Inertial Manifolds, and for various other nonlinear dissipative parabolic systems.

  3. About the Regularized Navier Stokes Equations

    NASA Astrophysics Data System (ADS)

    Cannone, Marco; Karch, Grzegorz

    2005-03-01

    The first goal of this paper is to study the large time behavior of solutions to the Cauchy problem for the 3-dimensional incompressible Navier-Stokes system. The Marcinkiewicz space L3,∞ is used to prove some asymptotic stability results for solutions with infinite energy. Next, this approach is applied to the analysis of two classical “regularized” Navier-Stokes systems. The first one was introduced by J. Leray and consists in “mollifying” the nonlinearity. The second one was proposed by J.-L. Lions, who added the artificial hyper-viscosity (−Δ)^{ℓ/2}, ℓ > 2 to the model. It is shown in the present paper that, in the whole space, solutions to those modified models converge as t → ∞ toward solutions of the original Navier-Stokes system.
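In schematic form (incompressible flow, ∇·u = 0), the two regularizations discussed are, in one common formulation (written here for orientation, not copied from the paper):

```latex
\text{Leray:}\quad
\partial_t u + \big((\rho_\varepsilon * u)\cdot\nabla\big) u - \nu\Delta u + \nabla p = 0,
\qquad
\text{Lions:}\quad
\partial_t u + (u\cdot\nabla) u + \varepsilon(-\Delta)^{\ell/2} u - \nu\Delta u + \nabla p = 0,
\quad \ell > 2,
```

where ρ_ε is a mollifier; both reduce formally to the Navier-Stokes system as the regularization is removed.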

  4. On reducibility of degenerate optimization problems to regular operator equations

    NASA Astrophysics Data System (ADS)

    Bednarczuk, E. M.; Tretyakov, A. A.

    2016-12-01

    We present an application of the p-regularity theory to the analysis of non-regular (irregular, degenerate) nonlinear optimization problems. The p-regularity theory, also known as the p-factor analysis of nonlinear mappings, has been developed over the last thirty years. The p-factor analysis is based on the construction of the p-factor operator, which allows us to analyze optimization problems in the degenerate case. We investigate reducibility of a non-regular optimization problem to a regular system of equations which does not depend on the objective function. As an illustration we consider applications of our results to non-regular complementarity problems of mathematical programming and to linear programming problems.

  5. Transient Lunar Phenomena: Regularity and Reality

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin P. S.

    2009-05-01

    Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ~50% of reports originate from near Aristarchus, ~16% from Plato, ~6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a "feature" as defined). TLP count consistency for these features indicates that ~80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.

  6. TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY

    SciTech Connect

    Crotts, Arlin P. S.

    2009-05-20

    Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ~50% of reports originate from near Aristarchus, ~16% from Plato, ~6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a "feature" as defined). TLP count consistency for these features indicates that ~80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.

  7. Analysis of regularized Navier-Stokes equations. I, II

    NASA Technical Reports Server (NTRS)

    Ou, Yuh-Roung; Sritharan, S. S.

    1991-01-01

    A regularized form of the conventional Navier-Stokes equations is analyzed. The global existence and uniqueness are established for two classes of generalized solutions. It is shown that the solution of this regularized system converges to the solution of the conventional Navier-Stokes equations for low Reynolds numbers. Particular attention is given to the structure of attractors characterizing the solutions. Both local and global invariant manifolds are found, and the regularity properties of these manifolds are analyzed.

  8. Regularization of multiplicative iterative algorithms with nonnegative constraint

    NASA Astrophysics Data System (ADS)

    Benvenuto, Federico; Piana, Michele

    2014-03-01

    This paper studies the regularization of constrained maximum likelihood iterative algorithms applied to incompatible ill-posed linear inverse problems. Specifically, we introduce a novel stopping rule which defines a regularization algorithm for the iterative space reconstruction algorithm in the case of least-squares minimization. Further, we show that the same rule regularizes the expectation maximization algorithm in the case of Kullback-Leibler minimization, provided a well-justified modification of the definition of Tikhonov regularization is introduced. The performance of this stopping rule is illustrated in the case of an image reconstruction problem in x-ray solar astronomy.
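As context, the expectation-maximization iteration for this kind of problem is the familiar multiplicative update x ← x · (Aᵀ(b/Ax)) / (Aᵀ1). The sketch below runs it on an invented, noise-free system and stops on a simple residual threshold, which is only a generic stand-in for the stopping rule the paper actually proposes:

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def rmatvec(A, y):  # A^T y
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

# toy nonnegative linear model b = A x_true (invented, noise-free)
A = [[0.8, 0.2],
     [0.3, 0.7],
     [0.5, 0.5]]
x_true = [1.0, 2.0]
b = matvec(A, x_true)

x = [1.0, 1.0]                      # positive initial guess
norm = rmatvec(A, [1.0] * len(A))   # column sums A^T 1
tol = 1e-6                          # discrepancy-style threshold (assumed)
for k in range(1000):
    Ax = matvec(A, x)
    ratio = [bi / axi for bi, axi in zip(b, Ax)]
    step = rmatvec(A, ratio)
    x = [xi * si / ni for xi, si, ni in zip(x, step, norm)]  # EM update
    residual = sum((axi - bi) ** 2
                   for axi, bi in zip(matvec(A, x), b)) ** 0.5
    if residual < tol:              # stop early; with noisy data this is
        break                       # where early stopping regularizes
```

The update preserves nonnegativity automatically, which is why stopping the iteration, rather than adding a penalty term, is the natural regularization mechanism here.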

  9. Current redistribution in resistor networks: Fat-tail statistics in regular and small-world networks

    NASA Astrophysics Data System (ADS)

    Lehmann, Jörg; Bernasconi, Jakob

    2017-03-01

    The redistribution of electrical currents in resistor networks after single-bond failures is analyzed in terms of current-redistribution factors that are shown to depend only on the topology of the network and on the values of the bond resistances. We investigate the properties of these current-redistribution factors for regular network topologies (e.g., d -dimensional hypercubic lattices) as well as for small-world networks. In particular, we find that the statistics of the current redistribution factors exhibits a fat-tail behavior, which reflects the long-range nature of the current redistribution as determined by Kirchhoff's circuit laws.
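The quantity itself is straightforward to compute on a toy network. The sketch below (an invented four-node network with unit resistors, not an example from the paper) solves Kirchhoff's equations before and after a single-bond failure and forms the current-redistribution factors:

```python
def solve(M, rhs):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            fac = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= fac * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def bond_currents(edges, n_nodes, src, sink, inject=1.0):
    """Node potentials from Kirchhoff's laws (unit resistors, sink grounded)."""
    L = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j in edges:
        L[i][i] += 1.0; L[j][j] += 1.0
        L[i][j] -= 1.0; L[j][i] -= 1.0
    free = [k for k in range(n_nodes) if k != sink]
    M = [[L[i][j] for j in free] for i in free]
    rhs = [inject if k == src else 0.0 for k in free]
    v = [0.0] * n_nodes
    for k, val in zip(free, solve(M, rhs)):
        v[k] = val
    return {e: v[e[0]] - v[e[1]] for e in edges}   # I = dV / R with R = 1

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # toy topology
before = bond_currents(edges, 4, src=0, sink=3)
failed = (0, 1)
after = bond_currents([e for e in edges if e != failed], 4, src=0, sink=3)
# redistribution factor: current change on a surviving bond,
# normalized by the pre-failure current of the failed bond
f = {e: (after[e] - before[e]) / before[failed]
     for e in edges if e != failed}
```

Note that the factors depend only on the topology and the resistances, exactly as stated in the abstract; the source/sink pair only sets the pre-failure current pattern.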

  10. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    SciTech Connect

    Casas, E.; Pola, C.

    1999-09-15

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
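As a simplified illustration of variational regularization of this type (not the paper's primal-dual algorithm), one can minimize a smoothed one-dimensional total-variation energy by gradient descent; the signal and all parameters below are invented:

```python
import math

def tv_smooth(u, eps):
    """Smoothed total variation of a 1-D signal."""
    return sum(math.sqrt((u[i + 1] - u[i]) ** 2 + eps ** 2)
               for i in range(len(u) - 1))

def energy(u, f, lam, eps):
    """Data fidelity plus smoothed-TV penalty."""
    return 0.5 * sum((ui - fi) ** 2 for ui, fi in zip(u, f)) \
        + lam * tv_smooth(u, eps)

# "blocky" 1-D signal with a deterministic alternating perturbation (toy data)
f = [0.0 + 0.2 * (-1) ** i for i in range(10)] + \
    [1.0 + 0.2 * (-1) ** i for i in range(10)]
lam, eps, step = 0.5, 0.1, 0.02

u = f[:]
for _ in range(2000):
    g = [ui - fi for ui, fi in zip(u, f)]           # data-term gradient
    for i in range(len(u) - 1):
        d = u[i + 1] - u[i]
        w = lam * d / math.sqrt(d * d + eps * eps)  # smoothed-TV gradient
        g[i] -= w
        g[i + 1] += w
    u = [ui - step * gi for ui, gi in zip(u, g)]
```

The smoothing parameter eps makes the seminorm differentiable; the paper's point is precisely that handling the nondifferentiable case properly (via a primal-dual framework and the right vector norm) is what preserves blocky, piecewise-constant structure.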

  11. Regularity and stability of transition fronts in nonlocal equations with time heterogeneous ignition nonlinearity

    NASA Astrophysics Data System (ADS)

    Shen, Wenxian; Shen, Zhongwei

    2017-03-01

    The present paper is devoted to the investigation of various properties of transition fronts in one-dimensional nonlocal equations in heterogeneous media of ignition type, whose existence has been established by the authors of the present paper in a previous work. It is first shown that transition fronts are continuously differentiable in space with uniformly bounded and uniformly Lipschitz continuous space partial derivative. This is the first time that the space regularity of transition fronts in nonlocal equations has been studied. It is then shown that transition fronts are uniformly steep. Finally, asymptotic stability, in the sense of exponentially attracting front-like initial data, of transition fronts is studied.

  12. MATRIX: a 15 ps resistive interpolation TDC ASIC based on a novel regular structure

    NASA Astrophysics Data System (ADS)

    Mauricio, J.; Gascón, D.; Ciaglia, D.; Gómez, S.; Fernández, G.; Sanuy, A.

    2016-12-01

    This paper presents a 4-channel TDC ASIC with the following features: 15-ps LSB (9.34 ps after calibration), 10-ps jitter, < 4-ps time resolution, up to 10 MHz of sustained input rate per channel, 45 mW of power consumption and very low area (910×215 μm²) in a commercial 180 nm technology. The main contribution of this work is the novel design of the clock interpolation circuitry based on a resistive interpolation mesh circuit (patented), a two-dimensional regular structure with very good properties in terms of power consumption, area and low process variability.

  13. Heavy pair production currents with general quantum numbers in dimensionally regularized nonrelativistic QCD

    SciTech Connect

    Hoang, Andre H.; Ruiz-Femenia, Pedro

    2006-12-01

    We discuss the form and construction of general color singlet heavy particle-antiparticle pair production currents for arbitrary quantum numbers, and issues related to evanescent spin operators and scheme dependences in nonrelativistic QCD in n = 3 − 2ε dimensions. The anomalous dimensions of the leading interpolating currents for heavy quark and colored scalar pairs in arbitrary ^{2S+1}L_J angular-spin states are determined at next-to-leading order in the nonrelativistic power counting.

  14. Dynamic MRI using SmooThness Regularization on Manifolds (SToRM)

    PubMed Central

    Poddar, Sunrita; Jacob, Mathews

    2017-01-01

    We introduce a novel algorithm to recover real time dynamic MR images from highly under-sampled k-t space measurements. The proposed scheme models the images in the dynamic dataset as points on a smooth, low dimensional manifold in high dimensional space. We propose to exploit the non-linear and non-local redundancies in the dataset by posing its recovery as a manifold smoothness regularized optimization problem. A navigator acquisition scheme is used to determine the structure of the manifold, or equivalently the associated graph Laplacian matrix. The estimated Laplacian matrix is used to recover the dataset from undersampled measurements. The utility of the proposed scheme is demonstrated by comparisons with state of the art methods in multi-slice real-time cardiac and speech imaging applications. PMID:26685228
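The core of such a recovery can be illustrated with a graph-Laplacian regularized least-squares problem, min ||Ax − b||² + λ xᵀLx. The sketch below is a drastic simplification (a chain graph standing in for the navigator-estimated manifold, two invented scalar "measurements" standing in for k-t space data):

```python
def solve(M, rhs):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            fac = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= fac * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

n = 6                              # six "frames" on a chain graph (toy)
samples = {0: 0.0, 5: 5.0}         # invented measurements at two frames
lam = 0.1                          # smoothness weight

# chain-graph Laplacian: penalizes differences between neighboring frames
L = [[0.0] * n for _ in range(n)]
for i in range(n - 1):
    L[i][i] += 1.0; L[i + 1][i + 1] += 1.0
    L[i][i + 1] -= 1.0; L[i + 1][i] -= 1.0

# normal equations of  min ||Ax - b||^2 + lam * x^T L x,
# where A samples the frames listed in `samples`
M = [[lam * L[i][j] + (1.0 if (i == j and i in samples) else 0.0)
      for j in range(n)] for i in range(n)]
rhs = [samples.get(i, 0.0) for i in range(n)]
x = solve(M, rhs)   # unmeasured frames are filled in smoothly
```

The Laplacian term propagates information from measured to unmeasured frames along graph edges; in SToRM the graph connects genuinely similar frames (estimated from navigators) rather than temporal neighbors, which is what exploits the non-local redundancy.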

  15. The residual method for regularizing ill-posed problems

    PubMed Central

    Grasmair, Markus; Haltmeier, Markus; Scherzer, Otmar

    2011-01-01

    Although the residual method, or constrained regularization, is frequently used in applications, a detailed study of its properties is still missing. This sharply contrasts the progress of the theory of Tikhonov regularization, where a series of new results for regularization in Banach spaces has been published in the recent years. The present paper intends to bridge the gap between the existing theories as far as possible. We develop a stability and convergence theory for the residual method in general topological spaces. In addition, we prove convergence rates in terms of (generalized) Bregman distances, which can also be applied to non-convex regularization functionals. We provide three examples that show the applicability of our theory. The first example is the regularized solution of linear operator equations on Lp-spaces, where we show that the results of Tikhonov regularization generalize unchanged to the residual method. As a second example, we consider the problem of density estimation from a finite number of sampling points, using the Wasserstein distance as a fidelity term and an entropy measure as regularization term. It is shown that the densities obtained in this way depend continuously on the location of the sampled points and that the underlying density can be recovered as the number of sampling points tends to infinity. Finally, we apply our theory to compressed sensing. Here, we show the well-posedness of the method and derive convergence rates both for convex and non-convex regularization under rather weak conditions. PMID:22345828
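In symbols, the relationship between the two methods can be stated as follows (F the forward operator, y the data, δ the noise level, R the regularization functional; a standard schematic formulation, not notation copied from the paper):

```latex
\text{Tikhonov:}\quad
\min_x \ \|F(x) - y\|^2 + \alpha\, \mathcal{R}(x),
\qquad
\text{residual method:}\quad
\min_x \ \mathcal{R}(x) \quad \text{s.t.}\quad \|F(x) - y\| \le \tau\delta.
```

The residual method fixes the admissible data misfit and minimizes the regularity functional over that set, whereas Tikhonov regularization trades the two off through the parameter α.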

  16. Fundamental and Regular Elementary Schools: Do Differences Exist?

    ERIC Educational Resources Information Center

    Weber, Larry J.; And Others

    This study compared the academic achievement and other outcomes of three public fundamental elementary schools with three regular elementary schools in a metropolitan school district. Modeled after the John Marshal Fundamental School in Pasadena, California, which opened in the fall of 1973, fundamental schools differ from regular schools in that…

  17. Analysis of regularized Navier-Stokes equations, 2

    NASA Technical Reports Server (NTRS)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.

  18. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  19. Regular and Special Educators Inservice: A Model of Cooperative Effort.

    ERIC Educational Resources Information Center

    van Duyne, H. John; And Others

    The Regular Education Inservice Program (REIT) at Bowling Green State University (Ohio) assists instructional resource centers (IRC's) and local educational agencies (LEA's) in developing and implementing inservice non-degree programs which respond to the mandates of Public Law 94-142. The target population is regular education personnel working…

  20. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    ERIC Educational Resources Information Center

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested as being part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  1. 39 CFR 3010.7 - Schedule of regular rate changes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Schedule of regular rate changes. 3010.7 Section... PRODUCTS General Provisions § 3010.7 Schedule of regular rate changes. (a) The Postal Service shall... estimated implementation dates for future Type 1-A rate changes for each separate class of mail, should...

  2. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    PubMed

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions detect chimeric peptides encoded partly by regular and partly by swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues for each regular and swinger parts, regular parts of eleven chimeric peptides correspond to six among the thirteen recognized, mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
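The two rule types are easy to state concretely. Below is an illustrative sketch (the sequence is invented; the example rules are taken from those named in the abstract): a symmetric exchange is an involution, so applying it twice restores the sequence, while an asymmetric exchange has period three.

```python
def exchange(seq, mapping):
    """Apply a bijective nucleotide transformation to a nucleotide string."""
    return seq.translate(str.maketrans(mapping))

sym = {"A": "C", "C": "A"}             # symmetric exchange: A <-> C
asym = {"A": "C", "C": "G", "G": "A"}  # asymmetric exchange: A -> C -> G -> A

s = "AACGTTGCA"                        # toy sequence (invented)
once = exchange(s, sym)                # "CCAGTTGAC"
twice = exchange(once, sym)            # back to the original

t1 = exchange(s, asym)
t3 = exchange(exchange(t1, asym), asym)  # three applications restore s
```

A chimeric RNA in the abstract's sense would then be a concatenation of an untransformed prefix with a transformed suffix, which is why contiguous regular and swinger residues within one peptide are such strong evidence.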

  3. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Cable television system regular monitoring. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands...

  7. 29 CFR 778.500 - Artificial regular rates.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...

  12. 29 CFR 778.408 - The specified regular rate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... employee's compensation. Suppose, for example, that the compensation of an employee is normally made up in...) with an employee whose regular weekly earnings are made up in part by the payment of regular bonuses... compensation, over and above the guaranteed amount, by way of extra premiums for work on holidays, or...

  13. Dimensional Duality

    SciTech Connect

    Green, Daniel; Lawrence, Albion; McGreevy, John; Morrison, David R.; Silverstein, Eva; /SLAC /Stanford U., Phys. Dept.

    2007-05-18

    We show that string theory on a compact negatively curved manifold, preserving a U(1)^{b_1} winding symmetry, grows at least b_1 new effective dimensions as the space shrinks. The winding currents yield a "D-dual" description of a Riemann surface of genus h in terms of its 2h-dimensional Jacobian torus, perturbed by a closed string tachyon arising as a potential energy term in the worldsheet sigma model. D-branes on such negatively curved manifolds also reveal this structure, with a classical moduli space consisting of a b_1-torus. In particular, we present an AdS/CFT system which offers a non-perturbative formulation of such supercritical backgrounds. Finally, we discuss generalizations of this new string duality.

  14. Regular expression order-sorted unification and matching

    PubMed Central

    Kutsia, Temur; Marin, Mircea

    2015-01-01

    We extend order-sorted unification by permitting regular expression sorts for variables and in the domains of function symbols. The obtained signature corresponds to a finite bottom-up unranked tree automaton. We prove that regular expression order-sorted (REOS) unification is of type infinitary and decidable. Our unification problem generalizes several known problems, such as order-sorted unification for ranked terms, sequence unification, and word unification with regular constraints. Decidability of REOS unification implies that sequence unification with regular hedge language constraints is decidable, generalizing the decidability result of word unification with regular constraints to terms. A sort weakening algorithm helps to construct a minimal complete set of REOS unifiers from the solutions of sequence unification problems. Moreover, we design a complete algorithm for REOS matching, and show that this problem is NP-complete and the corresponding counting problem is #P-complete. PMID:26523088

  15. Nonconvex regularizations in fluorescence molecular tomography for sparsity enhancement

    NASA Astrophysics Data System (ADS)

    Zhu, Dianwen; Li, Changqing

    2014-06-01

    In vivo fluorescence imaging has been a popular functional imaging modality in preclinical imaging. Near infrared probes used in fluorescence molecular tomography (FMT) are designed to localize in the targeted tissues, hence a sparse solution to the FMT image reconstruction problem is preferred. Nonconvex regularization methods are reported to enhance sparsity in the fields of statistical learning, compressed sensing, etc. We investigated such regularization methods in FMT for small animal imaging with numerical simulations and phantom experiments. We adopted a majorization-minimization algorithm for the iterative reconstruction process and compared the reconstructed images using our proposed nonconvex regularizations with those using the well-known L1 regularization. We found that the proposed nonconvex methods outperform L1 regularization in accurately recovering sparse targets in FMT.
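    To make the sparsity argument concrete, here is a minimal sketch (not taken from the paper, which does not spell out its penalty) contrasting the proximal operator of the convex L1 penalty with that of the nonconvex L0 penalty; the nonconvex rule zeroes small coefficients without shrinking the survivors, which is why such penalties can recover sparse targets more accurately:

    ```python
    import numpy as np

    def prox_l1(x, t):
        """Soft-thresholding: proximal operator of the convex penalty t*|x|."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def prox_l0(x, t):
        """Hard-thresholding: proximal operator of the nonconvex penalty t*1{x != 0}."""
        return np.where(np.abs(x) > np.sqrt(2.0 * t), x, 0.0)

    coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
    soft = prox_l1(coeffs, 1.0)  # survivors are biased toward zero
    hard = prox_l0(coeffs, 1.0)  # survivors keep their full magnitude
    ```

    In a majorization-minimization or proximal iteration, such a thresholding step is applied after each data-fidelity (gradient) step.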

  16. Group-sparsity regularization for ill-posed subsurface flow inverse problems

    NASA Astrophysics Data System (ADS)

    Golmohammadi, Azarang; Khaninezhad, Mohammad-Reza M.; Jafarpour, Behnam

    2015-10-01

    Sparse representations provide a flexible and parsimonious description of high-dimensional model parameters for reconstructing subsurface flow property distributions from limited data. To further constrain ill-posed inverse problems, group-sparsity regularization can take advantage of possible relations among the entries of unknown sparse parameters when: (i) groups of sparse elements are either collectively active or inactive and (ii) only a small subset of the groups is needed to approximate the parameters of interest. Since subsurface properties exhibit strong spatial connectivity patterns, they may lead to sparse descriptions that satisfy the above conditions. When these conditions are established, a group-sparsity regularization can be invoked to facilitate the solution of the resulting inverse problem by promoting sparsity across the groups. The proposed regularization penalizes the number of groups that are active without promoting sparsity within each group. Two implementations are presented in this paper: one based on the multiresolution tree structure of wavelet decomposition, without a need for explicit prior models, and another learned from explicit prior model realizations using sparse principal component analysis (SPCA). In each case, the approach first classifies the parameters of the inverse problem into groups with specific connectivity features, and then takes advantage of the grouped structure to recover the relevant patterns in the solution from the flow data. Several numerical experiments are presented to demonstrate the advantages of the additional constraining power of group-sparsity in solving ill-posed subsurface model calibration problems.
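    As an illustration of promoting sparsity across (but not within) groups, the following sketches the standard group-lasso proximal step, block soft-thresholding, which is a common convex surrogate for penalizing the number of active groups; the paper's exact penalty is not reproduced here:

    ```python
    import numpy as np

    def prox_group_l2(x, groups, t):
        """Block soft-thresholding: proximal operator of the group-lasso
        penalty t * sum_g ||x_g||_2. Whole groups are zeroed; entries of a
        surviving group are only rescaled uniformly, not sparsified."""
        out = np.zeros_like(x)
        for g in groups:
            norm = np.linalg.norm(x[g])
            if norm > t:
                out[g] = (1.0 - t / norm) * x[g]
        return out

    x = np.array([3.0, 4.0, 0.1, -0.1])
    groups = [[0, 1], [2, 3]]
    y = prox_group_l2(x, groups, t=1.0)
    # first group (norm 5) survives, scaled by 0.8; second group is zeroed
    ```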

  17. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS’s hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down LDSs’ spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data, we incorporate a second order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix which lead to two instances of our rLDS. We show that our rLDS is able to recover well the intrinsic dimensionality of the time series dynamics and it improves the predictive performance when compared to baselines on both synthetic and real-world MTS datasets. PMID:25905027
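    The rLDS machinery (EM with low-rank priors and a second order cone program) is involved, but the core idea of regularizing a transition matrix can be sketched in the simplest case: a ridge-penalized least-squares estimate of A from an observed MTS. All names and values here are illustrative, not from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate a toy MTS from a stable 2-state LDS: x_{t+1} = A x_t + noise.
    A_true = np.array([[0.9, 0.1],
                       [0.0, 0.8]])
    X = np.zeros((200, 2))
    for t in range(199):
        X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(2)

    # Ridge-penalized least-squares estimate of the transition matrix:
    #   min_A ||X1 - X0 A^T||_F^2 + lam * ||A||_F^2
    X0, X1 = X[:-1], X[1:]
    lam = 1.0
    A_hat = np.linalg.solve(X0.T @ X0 + lam * np.eye(2), X0.T @ X1).T
    ```

    The penalty here only shrinks entries; the paper's priors instead drive the transition matrix toward low rank, which is what shuts down spurious hidden dimensions.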

  18. Encoding of configural regularity in the human visual system.

    PubMed

    Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P

    2014-08-13

    The visual system is very efficient in encoding stimulus properties by utilizing available regularities in the inputs. To explore the underlying encoding strategies during visual information processing, we presented participants with two-line configurations that varied in the amount of configural regularity (or degrees of freedom in the relative positioning of the two lines) in an fMRI experiment. Configural regularity ranged from a generic configuration to stimuli resembling an "L" (i.e., a right-angle L-junction), a "T" (i.e., a right-angle midpoint T-junction), or a "+", the latter being the most regular stimulus. We found that the response strength in the shape-selective lateral occipital area was consistently lower for a higher degree of regularity in the stimuli. In the second experiment, using multivoxel pattern analysis, we further show that regularity is encoded in terms of the fMRI signal strength but not in the distributed pattern of responses. Finally, we found that the results of these experiments could not be accounted for by low-level stimulus properties and are distinct from norm-based encoding. Our results suggest that regularity plays an important role in stimulus encoding in the ventral visual processing stream.

  19. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  20. Learning rates of lq coefficient regularization learning with gaussian kernel.

    PubMed

    Lin, Shaobo; Zeng, Jinshan; Fang, Jian; Xu, Zongben

    2014-10-01

    Regularization is a well-recognized powerful strategy to improve the performance of a learning machine, and l_q regularization schemes with 0 < q < ∞ are widely used. It is known that different q leads to different properties of the deduced estimators, say, l_2 regularization leads to a smooth estimator, while l_1 regularization leads to a sparse estimator. How the generalization capability of l_q regularization learning varies with q is therefore worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing l_q coefficient regularization schemes in the sample-dependent hypothesis space associated with a Gaussian kernel can attain the same almost optimal learning rates for all 0 < q < ∞. That is, the upper and lower bounds of learning rates for l_q regularization learning are asymptotically identical for all 0 < q < ∞. Our finding tentatively reveals that in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other non-generalization criteria like smoothness, computational complexity, or sparsity.
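    A one-dimensional sketch of the contrast the abstract draws between the q = 2 and q = 1 cases: the l_2 (ridge) proximal map shrinks every coefficient smoothly and never produces zeros, while the l_1 map thresholds small coefficients to exactly zero:

    ```python
    import numpy as np

    def prox_l2(x, t):
        """Proximal operator of t*x^2 (ridge): smooth uniform shrinkage."""
        return x / (1.0 + 2.0 * t)

    def prox_l1(x, t):
        """Proximal operator of t*|x| (lasso): soft-thresholding, exact zeros."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    x = np.array([-2.0, -0.3, 0.1, 1.5])
    smooth = prox_l2(x, 0.5)  # every entry halved, none zero
    sparse = prox_l1(x, 0.5)  # small entries zeroed exactly
    ```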

  1. Motion-aware temporal regularization for improved 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mory, Cyril; Janssens, Guillaume; Rit, Simon

    2016-09-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.

  2. Regular Patterns in Cerebellar Purkinje Cell Simple Spike Trains

    PubMed Central

    Shin, Soon-Lim; Hoebeek, Freek E.; Schonewille, Martijn; De Zeeuw, Chris I.; Aertsen, Ad; De Schutter, Erik

    2007-01-01

    Background Cerebellar Purkinje cells (PC) in vivo are commonly reported to generate irregular spike trains, documented by high coefficients of variation of interspike-intervals (ISI). In strong contrast, they fire very regularly in the in vitro slice preparation. We studied the nature of this difference in firing properties by focusing on short-term variability and its dependence on behavioral state. Methodology/Principal Findings Using an analysis based on CV2 values, we could isolate precise regular spiking patterns, lasting up to hundreds of milliseconds, in PC simple spike trains recorded in both anesthetized and awake rodents. Regular spike patterns, defined by low variability of successive ISIs, comprised over half of the spikes, showed a wide range of mean ISIs, and were affected by behavioral state and tactile stimulation. Interestingly, regular patterns often coincided in nearby Purkinje cells without precise synchronization of individual spikes. Regular patterns exclusively appeared during the up state of the PC membrane potential, while single ISIs occurred both during up and down states. Possible functional consequences of regular spike patterns were investigated by modeling the synaptic conductance in neurons of the deep cerebellar nuclei (DCN). Simulations showed that these regular patterns caused epochs of relatively constant synaptic conductance in DCN neurons. Conclusions/Significance Our findings indicate that the apparent irregularity in cerebellar PC simple spike trains in vivo is most likely caused by mixing of different regular spike patterns, separated by single long intervals, over time. We propose that PCs may signal information, at least in part, in regular spike patterns to downstream DCN neurons. PMID:17534435
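    The CV2 measure used above is, in its standard form (Holt et al., 1996), CV2_i = 2|ISI_{i+1} − ISI_i| / (ISI_{i+1} + ISI_i). A short sketch shows how it isolates locally regular patterns that a single global CV would miss:

    ```python
    import numpy as np

    def cv2(isi):
        """CV2 of successive interspike intervals:
        CV2_i = 2*|ISI_{i+1} - ISI_i| / (ISI_{i+1} + ISI_i).
        Values near 0 flag locally regular firing even when the global CV is high."""
        isi = np.asarray(isi, dtype=float)
        return 2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1])

    # A regular pattern (10 ms intervals) interrupted by one long interval:
    isi = np.array([10.0, 10.0, 10.0, 50.0, 10.0, 10.0])
    local = cv2(isi)                    # 0 within the pattern, large at the break
    global_cv = isi.std() / isi.mean()  # inflated by the single long interval
    ```

    This is exactly the mixing effect the abstract describes: regular patterns separated by single long intervals yield a high global CV despite low local variability.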

  3. Blind image deblurring with edge enhancing total variation regularization

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Hong, Hanyu; Song, Jie; Hua, Xia

    2015-04-01

    Blind image deblurring is an important issue. In this paper, we focus on solving this issue by a constrained regularization method. Motivated by the importance of edges to visual perception, an edge-enhancing indicator is introduced to constrain the total variation regularization, and a bilateral filter is used for edge-preserving smoothing. The proposed edge-enhancing regularization method aims to smooth within each region while preserving edges. Experiments on simulated and real motion-blurred images show that the proposed method is competitive with recent state-of-the-art total variation methods.
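    For reference, the plain (non-edge-enhancing) total variation term that the proposed method constrains can be sketched as follows; the paper's edge-enhancing indicator and bilateral filtering are not reproduced:

    ```python
    import numpy as np

    def tv(u, eps=1e-8):
        """Isotropic total variation of a 2-D image, using forward differences."""
        dx = np.diff(u, axis=1, prepend=u[:, :1])
        dy = np.diff(u, axis=0, prepend=u[:1, :])
        return np.sum(np.sqrt(dx**2 + dy**2 + eps))

    rng = np.random.default_rng(1)
    flat = np.ones((32, 32))
    noisy = flat + 0.1 * rng.standard_normal((32, 32))
    # TV is near zero on the flat image and much larger on the noisy one,
    # which is why minimizing it smooths within regions -- at the risk of
    # flattening edges and texture, hence the edge-enhancing constraint.
    ```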

  4. Universality in the flooding of regular islands by chaotic states.

    PubMed

    Bäcker, Arnd; Ketzmerick, Roland; Monastra, Alejandro G

    2007-06-01

    We investigate the structure of eigenstates in systems with a mixed phase space in terms of their projection onto individual regular tori. Depending on dynamical tunneling rates and the Heisenberg time, regular states disappear and chaotic states flood the regular tori. For a quantitative understanding we introduce a random matrix model. The resulting statistical properties of eigenstates as a function of an effective coupling strength are in very good agreement with numerical results for a kicked system. We discuss the implications of these results for the applicability of the semiclassical eigenfunction hypothesis.

  5. Exploring the spectrum of regularized bosonic string theory

    SciTech Connect

    Ambjørn, J.; Makeenko, Y.

    2015-03-15

    We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.

  6. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  7. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime needs the effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object.

  8. Uniform Deterministic Discrete Method for three dimensional systems

    NASA Astrophysics Data System (ADS)

    Li, Ben-Wen; Tao, Wen-Quan; Nie, Yu-Hong

    1997-06-01

    For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical surface dividing method for the sending area element and the regular icosahedron for the sending volume element can handle direct exchange area computations for any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicated that the UDDM can give very high numerical accuracy.

  9. A novel regularized edge-preserving super-resolution algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Chen, Fu-sheng; Zhang, Zhi-jie; Wang, Chen-sheng

    2013-09-01

    Using super-resolution (SR) technology is a good approach to obtain high-resolution infrared images. However, image super-resolution reconstruction is essentially an ill-posed problem, so it is important to design an effective regularization term (image prior). A Gaussian prior is widely used in the regularization term, but the reconstructed SR image becomes over-smooth. Here, a novel regularization term called the non-local means (NLM) term is derived, based on the assumption that natural image content is likely to repeat itself within some neighborhood. In the proposed framework, the estimated high-resolution image is obtained by minimizing a cost function. The iteration method is applied to solve the optimization problem. As the iteration progresses, the regularization term is adaptively updated. The proposed algorithm has been tested in several experiments. The experimental results show that the proposed approach is robust and can reconstruct higher quality images in both quantitative terms and perceptual effect.
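    A minimal 1-D sketch of the non-local means idea underlying such a regularization term, assuming a simple Gaussian weighting of squared patch distances (the paper's exact weights and 2-D formulation are not reproduced):

    ```python
    import numpy as np

    def nlm_denoise_1d(y, patch=3, h=0.5):
        """Non-local means for a 1-D signal: each sample becomes a weighted
        average of all samples whose surrounding patches look similar,
        exploiting self-similarity instead of local smoothness."""
        n = len(y)
        pad = np.pad(y, patch, mode="edge")
        patches = np.stack([pad[i:i + 2 * patch + 1] for i in range(n)])
        out = np.empty(n)
        for i in range(n):
            d2 = np.sum((patches - patches[i]) ** 2, axis=1)
            w = np.exp(-d2 / (h * h))
            out[i] = np.dot(w, y) / w.sum()
        return out

    # Self-similar square wave plus noise: similar patches repeat every 2 samples.
    rng = np.random.default_rng(3)
    clean = np.tile([0.0, 1.0], 25)
    noisy = clean + 0.1 * rng.standard_normal(50)
    denoised = nlm_denoise_1d(noisy)
    ```

    Because the weights average over repeating structure rather than a local window, sharp transitions survive the smoothing, which is the property that avoids the over-smoothness of a Gaussian prior.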

  10. Thermodynamical Stability of a New Regular Black Hole

    NASA Astrophysics Data System (ADS)

    Saadat, Hassan

    2013-09-01

    In this paper we consider a new regular black hole and calculate thermodynamical variables such as entropy, specific heat and free energy. We then study the thermodynamical stability of this black hole by using the specific heat at constant volume.

  11. Spelling-stress regularity effects are intact in developmental dyslexia.

    PubMed

    Mundy, Ian R; Carroll, Julia M

    2013-01-01

    The current experiment investigated conflicting predictions regarding the effects of spelling-stress regularity on the lexical decision performance of skilled adult readers and adults with developmental dyslexia. In both reading groups, lexical decision responses were significantly faster and significantly more accurate when the orthographic structure of a word ending was a reliable as opposed to an unreliable predictor of lexical stress assignment. Furthermore, the magnitude of this spelling-stress regularity effect was found to be equivalent across reading groups. These findings are consistent with intact phoneme-level regularity effects also observed in dyslexia. The paper discusses how findings of intact spelling-sound regularity effects at both prosodic and phonemic levels, as well as other similar results, can be reconciled with the obvious difficulties that people with dyslexia experience in other domains of phonological processing.

  12. 39 CFR 3010.7 - Schedule of regular rate changes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... shall display the Schedule for Regular and Predictable Rate Changes on the Commission Web site, http... of mailers of each class of mail in developing the schedule. (e) Whenever the Postal Service deems...

  13. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

    Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of the Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity..., and at the same time, it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean-free-path epsilon approaches 0, and the convergence rate is estimated.
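    For concreteness, Rosenau's regularization replaces the local viscosity with a bounded convolution operator; in its standard formulation (conventions may differ slightly from the paper's):

    ```latex
    u_t + f(u)_x = \frac{1}{\varepsilon}\bigl[Q_\varepsilon * u - u\bigr],
    \qquad
    \widehat{Q_\varepsilon}(k) = \frac{1}{1 + \varepsilon^2 k^2},
    ```

    so the dissipative Fourier symbol is $-\varepsilon k^2/(1+\varepsilon^2 k^2)$: it matches the Navier-Stokes viscosity $-\varepsilon k^2$ at low wave-numbers but stays bounded by $1/\varepsilon$ at high wave-numbers, which is the boundedness property referred to above.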

  14. Mini-Stroke vs. Regular Stroke: What's the Difference?

    MedlinePlus

    ... How is a ministroke different from a regular stroke? Answers from Jerry W. Swanson, M.D. When ... brain, spinal cord or retina, which may cause stroke-like symptoms but does not damage brain cells ...

  15. On almost regularity and π-normality of topological spaces

    NASA Astrophysics Data System (ADS)

    Saad Thabit, Sadeq Ali; Kamarulhaili, Hailiza

    2012-05-01

    π-Normality is a weaker version of normality. It was introduced by Kalantan in 2008. π-Normality lies between normality and almost normality (resp. quasi-normality). The importance of this topological property is that it behaves slightly differently from normality and almost normality (quasi-normality). π-Normality is neither a productive nor a hereditary property in general. In this paper, some properties of almost regular spaces are presented. In particular, a few results on almost regular spaces are improved. Some relationships between almost regularity and π-normality are presented. π-Generalized closed sets are used to obtain a characterization and preservation theorems for π-normal spaces. Also, we show, by giving two counterexamples, that an almost regular Lindelöf space (resp. one with a σ-locally finite base) is not necessarily π-normal. Almost normality of the rational sequence topology is also proved.

  16. Are Pupils in Special Education Too "Special" for Regular Education?

    NASA Astrophysics Data System (ADS)

    Pijl, Ysbrand J.; Pijl, Sip J.

    1998-01-01

    In the Netherlands special needs pupils are often referred to separate schools for the Educable Mentally Retarded (EMR) or the Learning Disabled (LD). There is an ongoing debate on how to reduce the growing numbers of special education placements. One of the main issues in this debate concerns the size of the difference in cognitive abilities between pupils in regular education and those eligible for LD or EMR education. In this study meta-analysis techniques were used to synthesize the findings from 31 studies on differences between pupils in regular primary education and those in special education in the Netherlands. Studies were grouped into three categories according to the type of measurements used: achievement, general intelligence and neuropsychological tests. It was found that pupils in regular education and those in special education differ in achievement and general intelligence. Pupils in schools for the educable mentally retarded in particular perform at a much lower level than is common in regular Dutch primary education.

  17. Loop Invariants, Exploration of Regularities, and Mathematical Games.

    ERIC Educational Resources Information Center

    Ginat, David

    2001-01-01

    Presents an approach for illustrating, on an intuitive level, the significance of loop invariants for algorithm design and analysis. The illustration is based on mathematical games that require the exploration of regularities via problem-solving heuristics. (Author/MM)

  18. On maximal parabolic regularity for non-autonomous parabolic operators

    NASA Astrophysics Data System (ADS)

    Disser, Karoline; ter Elst, A. F. M.; Rehberg, Joachim

    2017-02-01

    We consider linear inhomogeneous non-autonomous parabolic problems associated to sesquilinear forms, with discontinuous dependence on time. We show that for these problems, the property of maximal parabolic regularity can be extrapolated to time integrability exponents r ≠ 2. This allows us to prove maximal parabolic L^r-regularity for discontinuous non-autonomous second-order divergence form operators in very general geometric settings and to prove existence results for related quasilinear equations.

  19. Deaths in the UK Regular Armed Forces 2006

    DTIC Science & Technology

    2007-03-30

    DEATHS IN THE UK REGULAR ARMED FORCES 2006

    INTRODUCTION

    • This National Statistic Notice provides summary statistics on deaths in 2006...categories of cause of death for 2006 (Table 2 and Figure 2).

    • Several changes have been made in the presentation of data from previous years. As...the Brigade of Gurkhas is part of the regular Army this Notice has been amended to include both the numbers of deaths for Gurkhas and the age

  20. Regular satellite formation and evolution in a dead zone

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Martin, Rebecca G.

    2017-01-01

    The dead zone in a circumplanetary disk is a non-turbulent region at the disk midplane that is an ideal location for regular satellite formation. The lower viscosity in the dead zone allows small objects to accrete and grow. We model the evolution of a circumplanetary disk with a dead zone for a range of disk and dead zone parameters. We investigate how these affect the formation and subsequent evolution of regular satellites that form in the disk.

  1. Iterative CT reconstruction using shearlet-based regularization

    NASA Astrophysics Data System (ADS)

    Vandeghinste, Bert; Goossens, Bart; Van Holen, Roel; Vanhove, Christian; Pizurica, Aleksandra; Vandenberghe, Stefaan; Staelens, Steven

    2012-03-01

    In computerized tomography, it is important to reduce the image noise without increasing the acquisition dose. Extensive research has been done into total variation minimization for image denoising and sparse-view reconstruction. However, TV minimization methods show superior denoising performance for simple images (with little texture), but result in texture information loss when applied to more complex images. Since in medical imaging, we are often confronted with textured images, it might not be beneficial to use TV. Our objective is to find a regularization term outperforming TV for sparse-view reconstruction and image denoising in general. A recent efficient solver was developed for convex problems, based on a split-Bregman approach, able to incorporate regularization terms different from TV. In this work, a proof-of-concept study demonstrates the usage of the discrete shearlet transform as a sparsifying transform within this solver for CT reconstructions. In particular, the regularization term is the 1-norm of the shearlet coefficients. We compared our newly developed shearlet approach to traditional TV on both sparse-view and on low-count simulated and measured preclinical data. Shearlet-based regularization does not outperform TV-based regularization for all datasets. Reconstructed images exhibit small aliasing artifacts in sparse-view reconstruction problems, but show no staircasing effect. This results in a slightly higher resolution than with TV-based regularization.
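The split-Bregman solver itself is beyond a short sketch, but the core of an ℓ1 penalty on transform coefficients is soft-thresholding in the sparsifying domain. The toy below stands in a Fourier transform for the shearlet transform, on a 1-D signal; the function names and the threshold value are illustrative, not from the paper:

```python
import numpy as np

def soft_threshold(c, t):
    """Proximal operator of t * ||c||_1, applied to complex coefficients."""
    mag = np.abs(c)
    scale = np.maximum(mag - t, 0.0) / np.where(mag > 0, mag, 1.0)
    return c * scale

def transform_domain_denoise(x, t):
    """Denoise by soft-thresholding Fourier coefficients -- a toy stand-in
    for the 1-norm-of-shearlet-coefficients regularizer in the abstract."""
    c = np.fft.fft(x) / len(x)           # normalize so thresholds are scale-free
    c = soft_threshold(c, t)
    return np.real(np.fft.ifft(c) * len(x))

rng = np.random.default_rng(0)
n = 256
clean = np.sin(2 * np.pi * 5 * np.arange(n) / n)
noisy = clean + 0.3 * rng.standard_normal(n)
denoised = transform_domain_denoise(noisy, t=0.05)
```

Because the clean signal is sparse in the transform domain while the noise is spread over all coefficients, thresholding removes most of the noise at a small cost in signal energy.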

  2. The relationship between lifestyle regularity and subjective sleep quality

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument-the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity-and a questionnaire instrument--the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant (rho = -0.4, p < 0.001) correlation between SRM (both metrics) and PSQI, indicating that subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
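The reported rho = -0.4 is a (Spearman) rank correlation. A minimal sketch of its computation, on synthetic scores that merely mimic the study's direction of effect (not its data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (No tie correction; adequate for continuous scores.)"""
    rx = np.argsort(np.argsort(x)) - (len(x) - 1) / 2.0
    ry = np.argsort(np.argsort(y)) - (len(y) - 1) / 2.0
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical scores: higher SRM (more lifestyle regularity) tends to
# go with lower PSQI (fewer sleep complaints), as in the study.
rng = np.random.default_rng(1)
srm = rng.uniform(1, 7, 100)
psqi = 10 - srm + rng.normal(0, 2, 100)
rho = spearman_rho(srm, psqi)
```

Rank correlation is invariant under any monotone rescaling of either score, which is why it suits ordinal questionnaire instruments like the PSQI.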

  3. Chaotic Advection in a Bounded 3-Dimensional Potential Flow

    NASA Astrophysics Data System (ADS)

    Metcalfe, Guy; Smith, Lachlan; Lester, Daniel

    2012-11-01

3-dimensional potential flows, or Darcy flows, are central to understanding and designing laminar transport in porous media; however, chaotic advection in 3-dimensional, volume-preserving flows is still not well understood. We show results of advecting passive scalars in a transient 3-dimensional potential flow that consists of a steady dipole flow and periodic reorientation. Even for the most symmetric reorientation protocol, neither of the two invariants of the motion is conserved; however, one invariant is closely shadowed by a surface of revolution constructed from particle paths of the steady flow, creating in practice an adiabatic surface. A consequence is that chaotic regions cover 3-dimensional space, though tubular regular regions are still transport barriers. This appears to be a new mechanism generating 3-dimensional chaotic orbits. These results contrast with the experimental and theoretical results for chaotic scalar transport in 2-dimensional Darcy flows [Wiggins, J. Fluid Mech. 654 (2010)].

  4. The ARM Best Estimate 2-dimensional Gridded Surface

    SciTech Connect

Xie, Shaocheng; Qi, Tang

    2015-06-15

    The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.

  5. Unexpected Regularity in Swimming Behavior of Clausocalanus furcatus Revealed by a Telecentric 3D Computer Vision System

    PubMed Central

    Bianco, Giuseppe; Botte, Vincenzo; Dubroca, Laurent; Ribera d’Alcalà, Maurizio; Mazzocchi, Maria Grazia

    2013-01-01

Planktonic copepods display a large repertoire of motion behaviors in a three-dimensional environment. Two-dimensional video observations demonstrated that the small copepod Clausocalanus furcatus, one of the most widely distributed calanoids at low to medium latitudes, presented a unique swimming behavior that was continuous and fast and followed notably convoluted trajectories. Furthermore, previous observations indicated that the motion of C. furcatus resembled a random process. We characterized the swimming behavior of this species in three-dimensional space using a video system equipped with telecentric lenses, which allow tracking of zooplankton without the distortion errors inherent in common lenses. Our observations revealed unexpected regularities in the behavior of C. furcatus that appear primarily in the horizontal plane and could not have been identified in previous observations based on lateral views. Our results indicate that the swimming behavior of C. furcatus is based on a limited repertoire of basic kinematic modules but exhibits greater plasticity than previously thought. PMID:23826331

  6. Elastic-net regularization versus ℓ 1-regularization for linear inverse problems with quasi-sparse solutions

    NASA Astrophysics Data System (ADS)

    Chen, De-Han; Hofmann, Bernd; Zou, Jun

    2017-01-01

We consider the ill-posed operator equation $Ax = y$ with an injective and bounded linear operator $A$ mapping between $\ell^2$ and a Hilbert space $Y$, possessing the unique solution $x^\dagger = \{x^\dagger_k\}_{k=1}^\infty$. For the cases where sparsity $x^\dagger \in \ell^0$ is expected but often slightly violated in practice, we investigate, in comparison with $\ell^1$-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the $\ell^1$-norm and the square of the $\ell^2$-norm, under the assumption that $x^\dagger \in \ell^1$. Two positive parameters occur in this approach: the weight parameter $\eta$ and the regularization parameter as the multiplier of the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in $\ell^1$-regularization. Based on the variational inequality approach for the description of the solution smoothness with respect to the forward operator $A$, and exploiting the method of approximate source conditions, we present some results to estimate the rate of convergence for the elastic-net regularization. The occurring rate function contains the rate of the decay $x^\dagger_k \to 0$ for $k \to \infty$ and the classical smoothness properties of $x^\dagger$ as an element in $\ell^2$.
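A minimal numerical sketch of the elastic-net Tikhonov functional described above, minimized by proximal-gradient (ISTA) iterations. The problem sizes and the alpha/eta values are illustrative only, and the paper itself concerns convergence rates, not this solver:

```python
import numpy as np

def elastic_net_ista(A, y, alpha, eta, n_iter=500):
    """Proximal-gradient sketch for
       min_x 0.5*||Ax - y||^2 + alpha * (||x||_1 + eta * ||x||_2^2)."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    s = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - s * A.T @ (A @ x - y)           # gradient step on the data term
        v = np.sign(v) * np.maximum(np.abs(v) - s * alpha, 0.0)  # l1 prox
        x = v / (1.0 + 2.0 * s * alpha * eta)   # squared-l2 prox (shrinkage)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = elastic_net_ista(A, y, alpha=0.1, eta=0.1)
```

For eta = 0 the update reduces to plain ℓ1 (ISTA/lasso); the extra quadratic term stabilizes the solution when sparsity is only approximately satisfied, which is the paper's regime.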

  7. Resampling Images to a Regular Grid From a Non-Regular Subset of Pixel Positions Using Frequency Selective Reconstruction.

    PubMed

    Seiler, Jurgen; Jonscher, Markus; Schöberl, Michael; Kaup, André

    2015-11-01

Even though image signals are typically defined on a regular 2D grid, there also exist many scenarios where this is not the case and the amplitude of the image signal is available only for a non-regular subset of pixel positions. In such a case, a resampling of the image to a regular grid has to be carried out. This is necessary since almost all algorithms and technologies for processing, transmitting or displaying image signals rely on the samples being available on a regular grid. Thus, it is of great importance to reconstruct the image on this regular grid, so that the reconstruction comes closest to the case that the signal had been originally acquired on the regular grid. In this paper, Frequency Selective Reconstruction is introduced for solving this challenging task. This algorithm reconstructs image signals by exploiting the property that small areas of images can be represented sparsely in the Fourier domain. By further considering the basic properties of the optical transfer function of imaging systems, a sparse model of the signal is iteratively generated. In doing so, the proposed algorithm is able to achieve a very high reconstruction quality, in terms of peak signal-to-noise ratio (PSNR) and structural similarity measure as well as in terms of visual quality. The simulation results show that the proposed algorithm is able to outperform state-of-the-art reconstruction algorithms and gains of more than 1 dB PSNR are possible.
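A 1-D toy analogue of the idea: greedily build a sparse Fourier model from the known samples only, then evaluate it on the full regular grid. This is a plain matching-pursuit sketch, not the paper's algorithm (which works on 2-D blocks and models the optical transfer function):

```python
import numpy as np

def fsr_1d(samples, mask, n_iter=20):
    """Toy 1-D analogue of Frequency Selective Reconstruction: iteratively
    pick the Fourier basis function best matching the residual on the known
    positions and add it to a sparse model defined on the whole grid."""
    n = len(mask)
    basis = np.fft.ifft(np.eye(n), axis=0) * n   # columns: exp(2*pi*i*k*t/n)
    model = np.zeros(n, dtype=complex)
    residual = np.where(mask, samples, 0.0).astype(complex)
    norms = np.sum(np.abs(basis[mask]) ** 2, axis=0)
    for _ in range(n_iter):
        # least-squares coefficient of each atom against the known residual
        coeffs = (basis[mask].conj().T @ residual[mask]) / norms
        k = np.argmax(np.abs(coeffs))            # frequency-selective choice
        update = coeffs[k] * basis[:, k]
        model += update
        residual[mask] -= update[mask]
    return np.real(model)

n = 128
t = np.arange(n)
clean = np.cos(2 * np.pi * 3 * t / n)
rng = np.random.default_rng(3)
mask = rng.random(n) < 0.4                       # ~40% of pixels known
rec = fsr_1d(np.where(mask, clean, 0.0), mask)
```

Since the signal is sparse in the Fourier domain, a few selected atoms fitted on the irregular subset extrapolate it onto the full regular grid.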

  8. Regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Cates, Matthew J

    2014-01-01

    Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular salmeterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and FDA submissions in relation to salmeterol. The date of the most recent search was August 2011. Selection criteria We included controlled parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular salmeterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 26 trials comparing salmeterol to placebo and eight trials comparing with salbutamol. These included 62,815 participants with asthma (including 2,599 children). In six trials (2,766 patients), no serious adverse event data could be obtained. All-cause mortality was higher with regular salmeterol than placebo but the increase was not significant (Peto odds ratio (OR) 1.33 (95% CI 0.85 to 2.08)). Non-fatal serious adverse events were significantly increased when regular salmeterol was compared with placebo (OR 1.15 95% CI 1.02 to 1.29). One extra serious adverse event occurred over 28 weeks for every 188 people

  9. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    SciTech Connect

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie

    2011-11-15

Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as l2 data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  10. Multiscale regularized reconstruction for enhancing microcalcification in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir; Zhou, Chuan

    2012-03-01

Digital breast tomosynthesis (DBT) holds strong promise for improving the sensitivity of detecting subtle mass lesions. Detection of microcalcifications is more difficult because of high noise and subtle signals in the large DBT volume. It is important to enhance the contrast-to-noise ratio (CNR) of microcalcifications in DBT reconstruction. A major challenge of implementing microcalcification enhancement or noise regularization in DBT reconstruction is to preserve the image quality of masses, especially those with ill-defined margins and subtle spiculations. We are developing a new multiscale regularization (MSR) method for the simultaneous algebraic reconstruction technique (SART) to improve the CNR of microcalcifications without compromising the quality of masses. Each DBT slice is stratified into different frequency bands via wavelet decomposition and the regularization method applies different degrees of regularization to different frequency bands to preserve features of interest and suppress noise. Regularization is constrained by a characteristic map to avoid smoothing subtle microcalcifications. The characteristic map is generated via image feature analysis to identify potential microcalcification locations in the DBT volume. The MSR method was compared to the non-convex total p-variation (TpV) method and SART with no regularization (NR) in terms of the CNR and the full width at half maximum of the line profiles intersecting calcifications and mass spiculations in DBT of human subjects. The results demonstrated that SART regularized by the MSR method was superior to the TpV method for subtle microcalcifications in terms of CNR enhancement. The MSR method preserved the quality of subtle spiculations better than the TpV method in comparison to NR.

  11. Another look at statistical learning theory and regularization.

    PubMed

    Cherkassky, Vladimir; Ma, Yunqian

    2009-09-01

    The paper reviews and highlights distinctions between function-approximation (FA) and VC theory and methodology, mainly within the setting of regression problems and a squared-error loss function, and illustrates empirically the differences between the two when data is sparse and/or input distribution is non-uniform. In FA theory, the goal is to estimate an unknown true dependency (or 'target' function) in regression problems, or posterior probability P(y/x) in classification problems. In VC theory, the goal is to 'imitate' unknown target function, in the sense of minimization of prediction risk or good 'generalization'. That is, the result of VC learning depends on (unknown) input distribution, while that of FA does not. This distinction is important because regularization theory originally introduced under clearly stated FA setting [Tikhonov, N. (1963). On solving ill-posed problem and method of regularization. Doklady Akademii Nauk USSR, 153, 501-504; Tikhonov, N., & V. Y. Arsenin (1977). Solution of ill-posed problems. Washington, DC: W. H. Winston], has been later used under risk-minimization or VC setting. More recently, several authors [Evgeniou, T., Pontil, M., & Poggio, T. (2000). Regularization networks and support vector machines. Advances in Computational Mathematics, 13, 1-50; Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning: Data mining, inference and prediction. Springer; Poggio, T. and Smale, S., (2003). The mathematics of learning: Dealing with data. Notices of the AMS, 50 (5), 537-544] applied constructive methodology based on regularization framework to learning dependencies from data (under VC-theoretical setting). However, such regularization-based learning is usually presented as a purely constructive methodology (with no clearly stated problem setting). This paper compares FA/regularization and VC/risk minimization methodologies in terms of underlying theoretical assumptions. The control of model

  12. Early family regularity protects against later disruptive behavior.

    PubMed

    Rijlaarsdam, Jolien; Tiemeier, Henning; Ringoot, Ank P; Ivanova, Masha Y; Jaddoe, Vincent W V; Verhulst, Frank C; Roza, Sabine J

    2016-07-01

    Infants' temperamental anger or frustration reactions are highly stable, but are also influenced by maturation and experience. It is yet unclear why some infants high in anger or frustration reactions develop disruptive behavior problems whereas others do not. We examined family regularity, conceptualized as the consistency of mealtime and bedtime routines, as a protective factor against the development of oppositional and aggressive behavior. This study used prospectively collected data from 3136 families participating in the Generation R Study. Infant anger or frustration reactions and family regularity were reported by mothers when children were ages 6 months and 2-4 years, respectively. Multiple informants (parents, teachers, and children) and methods (questionnaire and interview) were used in the assessment of children's oppositional and aggressive behavior at age 6. Higher levels of family regularity were associated with lower levels of child aggression independent of temperamental anger or frustration reactions (β = -0.05, p = 0.003). The association between child oppositional behavior and temperamental anger or frustration reactions was moderated by family regularity and child gender (β = 0.11, p = 0.046): family regularity reduced the risk for oppositional behavior among those boys who showed anger or frustration reactions in infancy. In conclusion, family regularity reduced the risk for child aggression and showed a gender-specific protective effect against child oppositional behavior associated with anger or frustration reactions. Families that ensured regularity of mealtime and bedtime routines buffered their infant sons high in anger or frustration reactions from developing oppositional behavior.

  13. Particle motion and Penrose processes around rotating regular black hole

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

The motion of neutral particles around a rotating regular black hole, derived from the Ayón-Beato-García (ABG) black hole solution by the Newman-Janis algorithm in the preceding paper (Toshmatov et al., Phys. Rev. D, 89:104017, 2014), has been studied. The dependence of the ISCO (innermost stable circular orbit along geodesics) and of unstable orbits on the value of the electric charge of the rotating regular black hole has been shown. Energy extraction from the rotating regular black hole through various processes has been examined. We have found an expression for the center-of-mass energy of colliding neutral particles coming from infinity, based on the BSW (Bañados-Silk-West) mechanism. The electric charge Q of the rotating regular black hole decreases the potential of the gravitational field as compared to the Kerr black hole, and the particles demonstrate less bound energy at the circular geodesics. This causes an increase in the efficiency of energy extraction through the BSW process in the presence of the electric charge Q of the rotating regular black hole. Furthermore, we have studied particle emission due to the BSW effect, assuming that two neutral particles collide near the horizon of the rotating regular extremal black hole and produce another two particles. We have shown that the efficiency of the energy extraction is less than the value 146.6% valid for the Kerr black hole. It has also been demonstrated that the efficiency of energy extraction from the rotating regular black hole via the Penrose process decreases with increasing electric charge Q and is smaller than 20.7%, the value for the extreme Kerr black hole with specific angular momentum a = M.

  14. Regular treatment with formoterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Cates, Matthew J

    2014-01-01

    Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular formoterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and Food and Drug Administration (FDA) submissions in relation to formoterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular formoterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second author checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 22 studies (8032 participants) comparing regular formoterol to placebo and salbutamol. Non-fatal serious adverse event data could be obtained for all participants from published studies comparing formoterol and placebo but only 80% of those comparing formoterol with salbutamol or terbutaline. Three deaths occurred on regular formoterol and none on placebo; this difference was not statistically significant. It was not possible to assess disease-specific mortality in view of the small number of deaths. Non-fatal serious adverse events were significantly increased when

  15. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
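For a small problem the L-curve can be computed exactly from the SVD; the abstract's point is that Lanczos bidiagonalization approximates this at a fraction of the cost for GRACE-sized systems. A sketch of the exact computation, on an illustrative ill-conditioned matrix rather than GRACE data:

```python
import numpy as np

def lcurve_points(A, b, lambdas):
    """Residual and solution norms of the Tikhonov solutions
    x(l) = argmin ||Ax - b||^2 + l^2 ||x||^2, computed via the SVD.
    (The paper replaces the SVD by cheap Lanczos bidiagonalization.)"""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res, sol = [], []
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam ** 2)   # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        res.append(np.linalg.norm(A @ x - b))
        sol.append(np.linalg.norm(x))
    return np.array(res), np.array(sol)

rng = np.random.default_rng(4)
n = 30
U0, _ = np.linalg.qr(rng.standard_normal((n, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U0 @ np.diag(10.0 ** np.linspace(0, -6, n)) @ V0.T   # ill-conditioned
b = A @ rng.standard_normal(n) + 1e-4 * rng.standard_normal(n)
lambdas = 10.0 ** np.linspace(-8, 0, 30)
res, sol = lcurve_points(A, b, lambdas)
```

Plotting log(sol) against log(res) traces the characteristic "L"; the corner balances data misfit against solution amplification and is the usual choice of regularization parameter.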

  16. Nonparametric Tikhonov Regularized NMF and Its Application in Cancer Clustering.

    PubMed

    Mirzal, Andri

    2014-01-01

The Tikhonov regularized nonnegative matrix factorization (TNMF) is an NMF objective function that enforces smoothness on the computed solutions, and has been successfully applied to many problem domains including text mining, spectral data analysis, and cancer clustering. There is, however, an issue that is still insufficiently addressed in the development of TNMF algorithms, i.e., how to develop mechanisms that can learn the regularization parameters directly from the data sets. The common approach is to use fixed values based on a priori knowledge about the problem domains. However, from the study of linear inverse problems it is known that the quality of the solutions of Tikhonov regularized least squares problems depends heavily on choosing appropriate regularization parameters. Since least squares are the building blocks of the NMF, it can be expected that a similar situation also applies to the NMF. In this paper, we propose two formulas to automatically learn the regularization parameters from the data set based on the L-curve approach. We also develop a convergent algorithm for the TNMF based on the additive update rules. Finally, we demonstrate the use of the proposed algorithm in cancer clustering tasks.
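The TNMF objective can be sketched with standard multiplicative updates; note the paper derives additive update rules and L-curve-based parameter learning, so the code below only illustrates the regularized objective with fixed, illustrative alpha and beta:

```python
import numpy as np

def tnmf(V, r, alpha=0.1, beta=0.1, n_iter=200, seed=0):
    """Multiplicative-update sketch of Tikhonov-regularized NMF:
    min_{W,H >= 0} ||V - W H||_F^2 + alpha*||W||_F^2 + beta*||H||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    eps = 1e-12                       # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + beta * H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + alpha * W + eps)
    return W, H

rng = np.random.default_rng(5)
V = rng.random((20, 2)) @ rng.random((2, 30))   # exactly rank-2, nonnegative
W, H = tnmf(V, r=2)
```

The regularization terms simply add alpha*W and beta*H to the denominators, shrinking the factors; choosing alpha and beta well is exactly the problem the paper's L-curve formulas address.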

  17. X-ray computed tomography using curvelet sparse regularization

    SciTech Connect

    Wieczorek, Matthias Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.

    2015-04-15

    Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method’s strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  18. Rotating Hayward's regular black hole as particle accelerator

    NASA Astrophysics Data System (ADS)

    Amir, Muhammed; Ghosh, Sushant G.

    2015-07-01

Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward's regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g > 0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M = 1, there exist a critical a_E and r_H^E, which correspond to a regular extremal black hole with degenerate horizons; a_E decreases whereas r_H^E increases with increasing g, while a < a_E describes a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward's regular black hole, for different g, and demonstrate numerically that the E_CM diverges in the vicinity of the horizon for the extremal cases, thereby suggesting that a rotating regular black hole can also act as a particle accelerator and thus in turn provide a suitable framework for Planck-scale physics. For the non-extremal case, there always exists a finite upper bound for the E_CM, which increases with the deviation parameter g.

  19. Regularization of inverse planning for intensity-modulated radiotherapy.

    PubMed

    Chvetsov, Alexei V; Calvetti, Daniela; Sohn, Jason W; Kinsella, Timothy J

    2005-02-01

    The performance of a variational regularization technique to improve robustness of inverse treatment planning for intensity modulated radiotherapy is analyzed and tested. Inverse treatment planning is based on the numerical solutions to the Fredholm integral equation of the first kind which is ill-posed. Therefore, a fundamental problem with inverse treatment planning is that it may exhibit instabilities manifested in nonphysical oscillations in the beam intensity functions. To control the instabilities, we consider a variational regularization technique which can be applied for the methods which minimize a quadratic objective function. In this technique, the quadratic objective function is modified by adding of a stabilizing functional that allows for arbitrary order regularization. An optimal form of stabilizing functional is selected which allows for both regularization and good approximation of beam intensity functions. The regularized optimization algorithm is shown, by comparison for a typical case of a head-and-neck cancer treatment, to be significantly more accurate and robust than the standard approach, particularly for the smaller beamlet sizes.
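With a quadratic objective and a quadratic stabilizing functional, the regularized problem has a closed-form solution via the normal equations. A sketch using a second-difference operator as one choice of the arbitrary-order functional mentioned above (random stand-in matrices, not a real dose-deposition model):

```python
import numpy as np

def regularized_intensities(A, d, beta):
    """Solve min_x ||A x - d||^2 + beta * ||D x||^2 for beam intensities x,
    where D is a second-difference stabilizing operator that penalizes
    nonphysical oscillations in the intensity profile."""
    n = A.shape[1]
    D = np.diff(np.eye(n), 2, axis=0)     # (n-2) x n rows of [1, -2, 1]
    return np.linalg.solve(A.T @ A + beta * D.T @ D, A.T @ d)

rng = np.random.default_rng(6)
A = rng.random((50, 40))     # hypothetical nonnegative dose kernel
d = rng.random(50)           # hypothetical prescribed dose samples
x_rough = regularized_intensities(A, d, beta=1e-8)
x_smooth = regularized_intensities(A, d, beta=10.0)
```

Increasing beta trades a slightly larger dose residual for a markedly smoother intensity profile, which is the stabilization effect the abstract describes.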

  20. Incorporating anatomical side information into PET reconstruction using nonlocal regularization.

    PubMed

    Nguyen, Van-Giang; Lee, Soo-Jin

    2013-10-01

    With the introduction of combined positron emission tomography (PET)/computed tomography (CT) or PET/magnetic resonance imaging (MRI) scanners, there is an increasing emphasis on reconstructing PET images with the aid of the anatomical side information obtained from X-ray CT or MRI scanners. In this paper, we propose a new approach to incorporating prior anatomical information into PET reconstruction using the nonlocal regularization method. The nonlocal regularizer developed for this application is designed to selectively consider the anatomical information only when it is reliable. As our proposed nonlocal regularization method does not directly use anatomical edges or boundaries which are often used in conventional methods, it is not only free from additional processes to extract anatomical boundaries or segmented regions, but also more robust to the signal mismatch problem that is caused by the indirect relationship between the PET image and the anatomical image. We perform simulations with digital phantoms. According to our experimental results, compared to the conventional method based on the traditional local regularization method, our nonlocal regularization method performs well even with the imperfect prior anatomical information or in the presence of signal mismatch between the PET image and the anatomical image.
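One common way to make a nonlocal regularizer use anatomy "only when it is reliable" is to weight neighboring voxels by anatomical patch similarity: across an anatomical boundary the patch distance is large and the weight nearly zero. A sketch of such weights (window sizes and the filtering parameter h are illustrative assumptions, not the authors' design):

```python
import numpy as np

def nonlocal_weights(anat, i, j, search=2, patch=1, h=0.1):
    """Similarity weights between pixel (i, j) and its search window,
    computed from patches of the anatomical image: dissimilar patches
    get negligible weight, so the prior uses anatomy only where it is
    locally consistent."""
    pad = patch + search
    padded = np.pad(anat, pad, mode="edge")
    ci, cj = i + pad, j + pad
    ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
    w = {}
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            if di == 0 and dj == 0:
                continue
            nb = padded[ci + di - patch:ci + di + patch + 1,
                        cj + dj - patch:cj + dj + patch + 1]
            w[(i + di, j + dj)] = np.exp(-np.mean((ref - nb) ** 2) / h ** 2)
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}

# Pixels on the same side of an anatomical edge dominate the weights
anat = np.zeros((8, 8)); anat[:, 4:] = 1.0
w = nonlocal_weights(anat, 3, 2)
```

Because the weights come from raw patch similarity rather than extracted edges, no boundary detection or segmentation step is needed.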

  1. Alternating Direction Method of Multiplier for Tomography With Nonlocal Regularizers

    PubMed Central

    Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2015-01-01

    The ordered subset expectation maximization (OSEM) algorithm approximates the gradient of a likelihood function using a subset of projections instead of all projections, so that fast image reconstruction is possible for emission and transmission tomography such as SPECT, PET, and CT. However, OSEM does not significantly accelerate reconstruction with computationally expensive regularizers such as patch-based nonlocal (NL) regularizers, because the regularizer gradient is evaluated for every subset. We propose to use variable splitting to separate the likelihood term and the regularizer term for the penalized emission tomographic image reconstruction problem and to optimize it using the alternating direction method of multipliers (ADMM). We also propose a fast algorithm to optimize the ADMM parameter based on convergence rate analysis. This new scheme enables more sub-iterations related to the likelihood term. We evaluated our ADMM for 3-D SPECT image reconstruction with a patch-based NL regularizer that uses the Fair potential function. Our proposed ADMM improved the speed of convergence substantially compared to other existing methods such as gradient descent, EM, OSEM using De Pierro’s approach, and the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. PMID:25291351
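The variable-splitting idea can be seen on a toy problem: introduce x = z, attach the data term to x and the regularizer to z, and alternate between them with a multiplier update. A hedged sketch using a generic least-squares likelihood and an l1 regularizer, rather than the paper's Poisson likelihood and patch-based NL/Fair regularizer:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by splitting x = z and
    alternating an x-update (quadratic), a z-update (soft threshold), and
    a dual ascent on the scaled multiplier u."""
    m, n = A.shape
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    Q = np.linalg.inv(AtA + rho * np.eye(n))     # cache the x-update solve
    for _ in range(n_iter):
        x = Q @ (Atb + rho * (z - u))            # likelihood (data) step
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of lam*||.||_1
        u = u + x - z                            # multiplier update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[[2, 7]] = [3.0, -2.0]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

The split lets the expensive regularizer prox be evaluated once per outer iteration while the data term takes as many sub-iterations as needed, which is the source of the speedup the paper exploits.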

  2. Regularization methods for near-field acoustical holography.

    PubMed

    Williams, E G

    2001-10-01

    The reconstruction of the pressure and normal surface velocity provided by near-field acoustical holography (NAH) from pressure measurements made near a vibrating structure is a linear, ill-posed inverse problem due to the existence of strongly decaying, evanescentlike waves. Regularization provides a technique of overcoming the ill-posedness and generates a solution to the linear problem in an automated way. We present four robust methods for regularization: the standard Tikhonov procedure along with a novel improved version, Landweber iteration, and the conjugate gradient approach. Each of these approaches can be applied to all forms of interior or exterior NAH problems: planar, cylindrical, spherical, and conformal. We also study two parameter selection procedures, the Morozov discrepancy principle and generalized cross validation, which are crucial to any regularization theory. In particular, we concentrate here on planar and cylindrical holography. These forms of NAH, which rely on the discrete Fourier transform, are important due to their popularity and their tremendous computational speed. In order to use regularization theory for the separable geometry problems we reformulate the equations of planar, cylindrical, and spherical NAH into an eigenvalue problem. The resulting eigenvalues and eigenvectors couple easily to regularization theory, which can be incorporated into the NAH software with little sacrifice in computational speed. The resulting complete automation of the NAH algorithm for both separable and nonseparable geometries overcomes the last significant hurdle for NAH.
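For a discrete propagator G, both the Tikhonov solution and the Morozov discrepancy rule are convenient to express through the SVD, mirroring the eigenvalue reformulation described above. A sketch on a toy operator whose singular values decay exponentially, like evanescent waves (all sizes and noise levels are illustrative):

```python
import numpy as np

def tikhonov_svd(G, p, alpha):
    """Tikhonov solution via the SVD filter factors f_i = s_i^2/(s_i^2 + alpha)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    f = s**2 / (s**2 + alpha)
    return Vt.T @ (f / s * (U.T @ p))

def morozov_alpha(G, p, noise_norm, alphas):
    """Largest alpha whose residual does not exceed the noise level
    (Morozov discrepancy principle)."""
    for a in sorted(alphas, reverse=True):
        if np.linalg.norm(G @ tikhonov_svd(G, p, a) - p) <= noise_norm:
            return a
    return min(alphas)

# Toy propagator with exponentially decaying singular values,
# mimicking the decay of evanescent waves
rng = np.random.default_rng(1)
n = 30
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
G = Q1 @ np.diag(np.exp(-np.arange(n) / 3.0)) @ Q2.T
w_true = Q2 @ (1.0 / (1.0 + np.arange(n)))   # smooth in the right singular basis
noise = 1e-3 * rng.standard_normal(n)
p = G @ w_true + noise
alpha = morozov_alpha(G, p, np.linalg.norm(noise), 10.0 ** np.arange(-12.0, 0.0))
w_reg = tikhonov_svd(G, p, alpha)
```

The filter factors damp the components with tiny singular values that would otherwise amplify measurement noise, which is exactly the role regularization plays for the evanescent wavenumbers in NAH.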

  3. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
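For reference, the classical (non-kernelized) elastic net that KENReg generalizes can be solved by proximal gradient descent; a minimal sketch (the kernelized variant learns over a kernel dictionary in the coefficient space instead of a fixed design matrix, and needs no Mercer condition):

```python
import numpy as np

def elastic_net(A, b, lam1, lam2, n_iter=500):
    """Proximal-gradient solver for the classical elastic net
    min_x 0.5*||Ax - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2."""
    t = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam2)   # step = 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b) + lam2 * x           # gradient of the smooth part
        v = x - t * g
        x = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)  # l1 prox
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10); x_true[1], x_true[4] = 2.0, -1.0
b = A @ x_true
x_hat = elastic_net(A, b, lam1=0.05, lam2=0.01)
```

The l1 term produces the sparseness and the l2 term the stability discussed in the letter; their interplay is what the refined analysis quantifies.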

  4. Regularized friction and continuation: Comparison with Coulomb's law

    NASA Astrophysics Data System (ADS)

    Vigué, Pierre; Vergez, Christophe; Karkar, Sami; Cochelin, Bruno

    2017-02-01

    Periodic solutions of systems with friction are difficult to investigate because of the non-smooth nature of friction laws. This paper examines periodic solutions and most notably stick-slip, on a simple one-degree-of-freedom system (mass, spring, damper, and belt), with Coulomb's friction law, and with a regularized friction law (i.e. the friction coefficient becomes a function of relative speed, with a stiffness parameter). With Coulomb's law, the stick-slip solution is constructed step by step, which gives a usable existence condition. With the regularized law, the Asymptotic Numerical Method and the Harmonic Balance Method provide bifurcation diagrams with respect to the belt speed or normal force, and for several values of the regularization parameter. Formulations from the Coulomb case give the means of a comparison between regularized solutions and a standard reference. With an appropriate definition, regularized stick-slip motion exists, its amplitude increases with respect to the belt speed and its pulsation decreases with respect to the normal force.
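A common regularization of Coulomb's law replaces the set-valued force at zero relative speed by a steep smooth function such as -mu*N*tanh(v_rel/eps), with 1/eps playing the role of the stiffness parameter. A toy time-stepping simulation of the mass-spring-damper-belt system under this assumption (parameters are illustrative; the paper instead uses the Asymptotic Numerical Method and Harmonic Balance):

```python
import numpy as np

def simulate(v_belt=0.5, eps=1e-3, mu=0.3, N=1.0, m=1.0, k=1.0, c=0.05,
             dt=1e-3, T=200.0):
    """Mass on a moving belt with the regularized Coulomb law
    f = -mu*N*tanh(v_rel/eps); the smaller eps, the closer the law is
    to the set-valued Coulomb friction."""
    x, v = 0.0, v_belt          # start stuck to the belt
    xs = []
    for _ in range(int(T / dt)):
        v_rel = v - v_belt
        f = -mu * N * np.tanh(v_rel / eps)   # regularized friction force
        a = (-k * x - c * v + f) / m
        x += dt * v
        v += dt * a
        xs.append(x)
    return np.array(xs)

xs = simulate()
```

The mass first rides the belt while the spring stretches, slips once the spring force reaches the friction bound mu*N, overshoots, and (with these damping values) eventually settles near x = mu*N/k.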

  5. Fast multislice fluorescence molecular tomography using sparsity-inducing regularization

    NASA Astrophysics Data System (ADS)

    Hejazi, Sedigheh Marjaneh; Sarkar, Saeed; Darezereshki, Ziba

    2016-02-01

    Fluorescence molecular tomography (FMT) is a rapidly growing imaging method that facilitates the recovery of small fluorescent targets within biological tissue. The major challenge facing the FMT reconstruction method is the ill-posed nature of the inverse problem. In order to overcome this problem, the acquisition of large FMT datasets and the utilization of a fast FMT reconstruction algorithm with sparsity regularization have been suggested recently. Therefore, the use of a joint L1/total-variation (TV) regularization as a means of solving the ill-posed FMT inverse problem is proposed. A comparative quantified analysis of regularization methods based on L1-norm and TV are performed using simulated datasets, and the results show that the fast composite splitting algorithm regularization method can ensure the accuracy and robustness of the FMT reconstruction. The feasibility of the proposed method is evaluated in an in vivo scenario for the subcutaneous implantation of a fluorescent-dye-filled capillary tube in a mouse, and also using hybrid FMT and x-ray computed tomography data. The results show that the proposed regularization overcomes the difficulties created by the ill-posed inverse problem.

  6. Radial basis function networks and complexity regularization in function learning.

    PubMed

    Krzyzak, A; Linder, T

    1998-01-01

    In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single hidden layer radial basis function network. Our approach differs from previous complexity regularization neural-network function learning schemes in that we operate with random covering numbers and l(1) metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
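A single-hidden-layer Gaussian RBF network of the kind analyzed can be sketched with a ridge penalty standing in for the complexity term (the paper's scheme selects complexity by penalized empirical risk minimization over covering numbers; the activation, centers, and lam here are illustrative):

```python
import numpy as np

def rbf_fit(x, y, centers, width, lam):
    """Train a single-hidden-layer Gaussian RBF network by penalized
    least squares; lam is a simple stand-in for the complexity penalty."""
    def design(t):
        return np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    Phi = design(x)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)
    return lambda t: design(t) @ w

rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, 120)
y = np.sin(x) + 0.1 * rng.standard_normal(120)
f = rbf_fit(x, y, centers=np.linspace(-3, 3, 15), width=0.6, lam=1e-2)
xt = np.linspace(-2.5, 2.5, 50)
rmse = np.sqrt(np.mean((f(xt) - np.sin(xt)) ** 2))
```

The penalty keeps the output weights bounded, which is the practical counterpart of the bounded-variation constraint on activations in the analysis.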

  7. Manufacture of Regularly Shaped Sol-Gel Pellets

    NASA Technical Reports Server (NTRS)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, and regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogels, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogels. Advantageously, the extruded pellets can be dropped directly into a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross-linker. These pellets can then be dried to mechanically robust X-Aerogels.

  8. Context effects on orthographic learning of regular and irregular words.

    PubMed

    Wang, Hua-Chen; Castles, Anne; Nickels, Lyndsey; Nation, Kate

    2011-05-01

    The self-teaching hypothesis proposes that orthographic learning takes place via phonological decoding in meaningful texts, that is, in context. Context is proposed to be important in learning to read, especially when decoding is only partial. However, little research has directly explored this hypothesis. The current study looked at the effect of context on orthographic learning and examined whether there were different effects for novel words given regular and irregular pronunciations. Two experiments were conducted using regular and irregular novel words, respectively. Second-grade children were asked to learn eight novel words either in stories or in a list of words. The results revealed no significant effect of context for the regular items. However, in an orthographic decision task, there was a facilitatory effect of context on irregular novel word learning. The findings support the view that contextual information is important to orthographic learning, but only when the words to be learned contain irregular spelling-sound correspondences.

  9. Analysis of the "Learning in Regular Classrooms" movement in China.

    PubMed

    Deng, M; Manset, G

    2000-04-01

    The Learning in Regular Classrooms experiment has evolved in response to China's efforts to educate its large population of students with disabilities who, until the mid-1980s, were denied a free education. In the Learning in Regular Classrooms, students with disabilities (primarily sensory impairments or mild mental retardation) are educated in neighborhood schools in mainstream classrooms. Despite difficulties associated with developing effective inclusive programming, this approach has contributed to a major increase in the enrollment of students with disabilities and increased involvement of schools, teachers, and parents in China's newly developing special education system. Here we describe the development of the Learning in Regular Classroom approach and the challenges associated with educating students with disabilities in China.

  10. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
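The core mechanism (wavelets compress smooth regions into small coefficients while edges stay large, so shrinking small coefficients regularizes without blurring edges) can be sketched with a one-level Haar transform and a single global soft threshold; the paper's scheme adapts the regularization spatially, so a fixed threshold is a simplification:

```python
import numpy as np

def haar2(img):
    """One level of the orthonormal 2D Haar transform (even-sized input)."""
    lo_r = (img[::2] + img[1::2]) / np.sqrt(2)
    hi_r = (img[::2] - img[1::2]) / np.sqrt(2)
    split = lambda m: ((m[:, ::2] + m[:, 1::2]) / np.sqrt(2),
                       (m[:, ::2] - m[:, 1::2]) / np.sqrt(2))
    LL, LH = split(lo_r)
    HL, HH = split(hi_r)
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2."""
    def merge_cols(lo, hi):
        m = np.empty((lo.shape[0], lo.shape[1] * 2))
        m[:, ::2] = (lo + hi) / np.sqrt(2)
        m[:, 1::2] = (lo - hi) / np.sqrt(2)
        return m
    lo_r, hi_r = merge_cols(LL, LH), merge_cols(HL, HH)
    img = np.empty((lo_r.shape[0] * 2, lo_r.shape[1]))
    img[::2] = (lo_r + hi_r) / np.sqrt(2)
    img[1::2] = (lo_r - hi_r) / np.sqrt(2)
    return img

def haar_denoise(img, t):
    """Soft-threshold only the detail bands: small (noise) coefficients
    are suppressed while large (edge) coefficients survive."""
    LL, LH, HL, HH = haar2(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
    return ihaar2(LL, soft(LH), soft(HL), soft(HH))

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0        # an ideal edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
out = haar_denoise(noisy, t=0.3)
```

Making the threshold depend on scale and orientation, as the paper does, is what allows the degree of regularization to track local image structure.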

  11. Breast ultrasound tomography with total-variation regularization

    SciTech Connect

    Huang, Lianjie; Li, Cuiping; Duric, Neb

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.

  12. Structural characterization of the packings of granular regular polygons

    NASA Astrophysics Data System (ADS)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to eleven edges under gravity. The effects of shape and friction on the packing structures are investigated via various structural parameters, including the packing fraction, the radial distribution function, the coordination number, Voronoi tessellation, and bond-orientational order. We find that the packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by increasing the edge number and decreasing friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of the Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture of the packings of regular polygons.

  13. Gamma regularization based reconstruction for low dose CT.

    PubMed

    Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis

    2015-09-07

    Reducing the radiation dose in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the l0-norm and l1-norm. We evaluate the proposed approach using projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction performs better in both edge preservation and noise suppression when compared with other norms.

  14. Local conservative regularizations of compressible magnetohydrodynamic and neutral flows

    NASA Astrophysics Data System (ADS)

    Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.

    2016-02-01

    Ideal systems like magnetohydrodynamics (MHD) and Euler flow may develop singularities in vorticity (w = ∇×v). Viscosity and resistivity provide dissipative regularizations of the singularities. In this paper, we propose a minimal, local, conservative, nonlinear, dispersive regularization of compressible flow and ideal MHD, in analogy with the KdV regularization of the 1D kinematic wave equation. This work extends and significantly generalizes earlier work on incompressible Euler and ideal MHD. It involves a micro-scale cutoff length λ which is a function of density, unlike in the incompressible case. In MHD, it can be taken to be of order the electron collisionless skin depth c/ω_pe. Our regularization preserves the symmetries of the original systems and, with appropriate boundary conditions, leads to associated conservation laws. Energy and enstrophy are subject to a priori bounds determined by initial data, in contrast to the unregularized systems. A Hamiltonian and Poisson bracket formulation is developed and applied to generalize the constitutive relation to bound higher moments of vorticity. A "swirl" velocity field is identified, and shown to transport w/ρ and B/ρ, generalizing the Kelvin-Helmholtz and Alfvén theorems. The steady regularized equations are used to model a rotating vortex, an MHD pinch, and a plane vortex sheet. The proposed regularization could facilitate numerical simulations of fluid/MHD equations and provide a consistent statistical mechanics of vortices/current filaments in 3D, without blowup of enstrophy. Implications of our work for detailed analyses of fluid and plasma dynamic systems are briefly discussed.

  15. Zigzag stacks and m-regular linear stacks.

    PubMed

    Chen, William Y C; Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2014-12-01

    The contact map of a protein fold is a graph that represents the patterns of contacts in the fold. It is known that the contact map can be decomposed into stacks and queues. RNA secondary structures are special stacks in which the degree of each vertex is at most one and each arc has length of at least two. Waterman and Smith derived a formula for the number of RNA secondary structures of length n with exactly k arcs. Höner zu Siederdissen et al. developed a folding algorithm for extended RNA secondary structures in which each vertex has maximum degree two. An equation for the generating function of extended RNA secondary structures was obtained by Müller and Nebel by using a context-free grammar approach, which leads to an asymptotic formula. In this article, we consider m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by two. Extended RNA secondary structures are exactly 2-regular linear stacks. For any m ≥ 2, we obtain an equation for the generating function of the m-regular linear stacks. For given m, we deduce a recurrence relation and an asymptotic formula for the number of m-regular linear stacks on n vertices. To establish the equation, we use the reduction operation of Chen, Deng, and Du to transform an m-regular linear stack to an m-reduced zigzag (or alternating) stack. Then we find an equation for m-reduced zigzag stacks leading to an equation for m-regular linear stacks.
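For the degree-at-most-one case mentioned above (classical RNA secondary structures with arc length at least m), the counts satisfy a simple recurrence obtained by conditioning on the partner of the last vertex. A sketch of that recurrence (it does not cover the degree-two extended structures handled by the article's generating-function analysis):

```python
from functools import lru_cache

def count_structures(n, m=2):
    """Number of noncrossing partial matchings on n vertices in which
    every arc (i, j) has length j - i >= m and every vertex has degree
    at most one (m = 2 gives classical RNA secondary structures).
    Recurrence: the last vertex is unpaired, or pairs with some j,
    splitting the problem into an inside part and an outside part."""
    @lru_cache(maxsize=None)
    def S(k):
        if k <= 1:
            return 1
        total = S(k - 1)                       # last vertex unpaired
        for j in range(1, k - m + 1):          # vertex k pairs with j, k - j >= m
            total += S(j - 1) * S(k - j - 1)   # outside * inside
        return total
    return S(n)

counts = [count_structures(n) for n in range(8)]
```

With m = 2 this reproduces the classical secondary-structure counts 1, 1, 1, 2, 4, 8, 17, 37, ...; the reduction to zigzag stacks in the article is what extends such counting to degree two.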

  16. Diffraction of a shock wave by a compression corner; regular and single Mach reflection

    NASA Technical Reports Server (NTRS)

    Vijayashankar, V. S.; Kutler, P.; Anderson, D.

    1976-01-01

    The two-dimensional, time-dependent Euler equations which govern the flow field resulting from the interaction of a planar shock with a compression corner are solved with initial conditions that result in either regular reflection or single Mach reflection of the incident planar shock. The Euler equations, which are hyperbolic, are transformed to include the self-similarity of the problem. A normalization procedure is employed to align the reflected shock and the Mach stem as computational boundaries to implement the shock-fitting procedure. A special floating fitting scheme is developed in conjunction with the method of characteristics to fit the slip surface. The reflected shock, the Mach stem, and the slip surface are all treated as sharp discontinuities, thus resulting in a more accurate description of the inviscid flow field. The resulting numerical solutions are compared with available experimental data and existing first-order, shock-capturing numerical solutions.

  17. Regular network model for the sea ice-albedo feedback in the Arctic.

    PubMed

    Müller-Stoffels, Marc; Wackerbauer, Renate

    2011-03-01

    The Arctic Ocean and sea ice form a feedback system that plays an important role in the global climate. The complexity of highly parameterized global circulation (climate) models makes it very difficult to assess feedback processes in climate without the concurrent use of simple models where the physics is understood. We introduce a two-dimensional energy-based regular network model to investigate feedback processes in an Arctic ice-ocean layer. The model includes the nonlinear aspect of the ice-water phase transition, a nonlinear diffusive energy transport within a heterogeneous ice-ocean lattice, and spatiotemporal atmospheric and oceanic forcing at the surfaces. First results for a horizontally homogeneous ice-ocean layer show bistability and related hysteresis between perennial ice and perennial open water for varying atmospheric heat influx. Seasonal ice cover exists as a transient phenomenon. We also find that ocean heat fluxes are more efficient than atmospheric heat fluxes to melt Arctic sea ice.

  18. Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators

    NASA Astrophysics Data System (ADS)

    Wolfrum, Matthias; Omel'chenko, Oleh E.; Sieber, Jan

    2015-05-01

    We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop, connecting the phase lag to the order parameter, we can observe chimera states also for systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium.
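A minimal relative of such a model (phase oscillators on a ring, each coupled to its R nearest neighbors on either side with phase lag alpha) can be set up as follows. The global feedback loop connecting the phase lag to the order parameter is omitted here, and all parameter values are illustrative; whether a chimera appears depends on them:

```python
import numpy as np

def simulate_ring(N=64, R=20, alpha=1.45, dt=0.05, steps=2000, seed=0):
    """Euler integration of d(theta_i)/dt =
    (1/2R) * sum_{|k|<=R, k!=0} sin(theta_{i+k} - theta_i - alpha)
    on a ring of N identical phase oscillators."""
    rng = np.random.default_rng(seed)
    theta = 2 * np.pi * rng.random(N)
    idx = np.arange(N)
    neighbors = np.array([(idx + k) % N
                          for k in range(-R, R + 1) if k != 0])
    for _ in range(steps):
        dtheta = np.mean(np.sin(theta[neighbors] - theta[None, :] - alpha),
                         axis=0)
        theta = (theta + dt * dtheta) % (2 * np.pi)
    return theta

theta = simulate_ring()
order = abs(np.mean(np.exp(1j * theta)))   # global order parameter
```

In the paper, feeding this order parameter back into the phase lag is what stabilizes chimera-like states even at small N.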

  19. Regularized discriminant analysis for multi-sensor decision fusion and damage detection with Lamb waves

    NASA Astrophysics Data System (ADS)

    Mishra, Spandan; Vanli, O. Arda; Huffer, Fred W.; Jung, Sungmoon

    2016-04-01

    In this study we propose a regularized linear discriminant analysis approach for damage detection which does not require an intermediate feature extraction step and is therefore more efficient in handling data with high dimensionality. A robust discriminant model is obtained by shrinking the covariance matrix toward a diagonal matrix and thresholding redundant predictors without hurting the predictive power of the model. The shrinking and threshold parameters of the discriminant function (decision boundary) are estimated to minimize the classification error. Furthermore, it is shown how the damage classification achieved by the proposed method can be extended to multiple sensors by following a Bayesian decision-fusion formulation. The detection probability of each sensor is used as a prior condition to estimate the posterior detection probability of the entire network, and the posterior detection probability is used as a quantitative basis to make the final decision about the damage.
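The shrink-and-threshold construction can be sketched for two classes: shrink the pooled covariance toward its diagonal, zero out small standardized mean differences to drop redundant predictors, and classify with the resulting linear rule. This simplifies the paper's estimators and its error-minimizing parameter search (gamma and tau below are illustrative):

```python
import numpy as np

def fit_rda(X0, X1, gamma=0.5, tau=1.0):
    """Two-class linear discriminant with the pooled covariance shrunk
    toward its diagonal (weight gamma) and standardized mean differences
    below tau thresholded to zero."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    S = Xc.T @ Xc / (len(Xc) - 2)                       # pooled covariance
    S = (1.0 - gamma) * S + gamma * np.diag(np.diag(S)) # shrink toward diagonal
    d = mu1 - mu0
    z = d / np.sqrt(np.diag(S))
    d = np.where(np.abs(z) < tau, 0.0, d)               # drop redundant predictors
    w = np.linalg.solve(S, d)
    b = -0.5 * w @ (mu0 + mu1)
    return lambda X: (X @ w + b > 0).astype(int)        # 1 => class 1 (damaged)

rng = np.random.default_rng(0)
X0 = rng.standard_normal((100, 10))                     # class 0
X1 = rng.standard_normal((100, 10)); X1[:, :2] += 2.0   # class 1 differs in 2 of 10 dims
clf = fit_rda(X0, X1)
```

Shrinkage keeps the covariance invertible when the feature dimension approaches the sample size, which is why no separate feature extraction step is needed.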

  20. REGULARIZED 3D FUNCTIONAL REGRESSION FOR BRAIN IMAGE DATA VIA HAAR WAVELETS.

    PubMed

    Wang, Xuejing; Nan, Bin; Zhu, Ji; Koeppe, Robert

    2014-06-01

    The primary motivation and application in this article come from brain imaging studies on cognitive impairment in elderly subjects with brain disorders. We propose a regularized Haar wavelet-based approach for the analysis of three-dimensional brain image data in the framework of functional data analysis, which automatically takes into account the spatial information among neighboring voxels. We conduct extensive simulation studies to evaluate the prediction performance of the proposed approach and its ability to identify regions related to the outcome of interest, under the assumption that only a few relatively small subregions are truly predictive of the outcome. We then apply the proposed approach to searching for brain subregions that are associated with cognition using PET images of patients with Alzheimer's disease, patients with mild cognitive impairment, and normal controls.

  1. Regularized quantile regression under heterogeneous sparsity with application to quantitative genetic traits

    PubMed Central

    He, Qianchuan; Kong, Linglong; Wang, Yanhua; Wang, Sijian; Chan, Timothy A.; Holland, Eric

    2016-01-01

    Genetic studies often involve quantitative traits. Identifying genetic features that influence quantitative traits can help to uncover the etiology of diseases. The quantile regression method considers the conditional quantiles of the response variable and is able to characterize the underlying regression structure in a more comprehensive manner. On the other hand, genetic studies often involve high-dimensional genomic features, and the underlying regression structure may be heterogeneous in terms of both effect sizes and sparsity. To account for the potential genetic heterogeneity, including heterogeneous sparsity, a regularized quantile regression method is introduced. The theoretical properties of the proposed method are investigated, and its performance is examined through a series of simulation studies. A real dataset is analyzed to demonstrate the application of the proposed method. PMID:28133403
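A bare-bones version of penalized quantile regression minimizes the pinball loss plus an l1 penalty. A subgradient-descent sketch (the paper's heterogeneity-aware penalty and its optimizer differ; the data, lam, and step size below are illustrative):

```python
import numpy as np

def reg_quantile(X, y, tau=0.5, lam=0.01, lr=0.05, n_iter=5000):
    """Subgradient descent for l1-penalized quantile regression:
    min_beta (1/n) * sum_i rho_tau(y_i - x_i' beta) + lam * ||beta||_1,
    where rho_tau(r) = r * (tau - 1{r < 0}) is the pinball loss."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        g = -X.T @ np.where(r > 0, tau, tau - 1.0) / n   # pinball subgradient
        g = g + lam * np.sign(beta)                      # l1 subgradient
        beta = beta - lr * g
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
beta_true = np.array([1.0, -2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.laplace(0.0, 0.1, 200)
beta_hat = reg_quantile(X, y, tau=0.5)
```

Fitting several tau levels with their own penalties is what lets the method capture sparsity patterns that vary across quantiles.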

  2. Spatial-temporal total variation regularization (STTVR) for 4D-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Haibo; Maier, Andreas; Fahrig, Rebecca; Hornegger, Joachim

    2012-03-01

    Four-dimensional computed tomography (4D-CT) is very important for treatment planning in the thorax or abdomen region, e.g. for guiding radiation therapy planning. The respiratory motion makes the reconstruction problem ill-posed. Recently, compressed sensing theory was introduced. It uses sparsity as a prior to solve the problem and improves image quality considerably. However, the images at each phase are reconstructed individually, and the correlations between neighboring phases are not considered in the reconstruction process. In this paper, we propose the spatial-temporal total variation regularization (STTVR) method, which employs sparsity not only in the spatial domain but also in the temporal domain. The algorithm is validated with the XCAT thorax phantom. The Euclidean norm of the difference between the reconstructed image and the ground truth is calculated for evaluation. The results indicate that our method improves the reconstruction quality by more than 50% compared to standard ART.

  4. Mathematical Model and Simulation of Particle Flow around Choanoflagellates Using the Method of Regularized Stokeslets

    NASA Astrophysics Data System (ADS)

    Nararidh, Niti

    2013-11-01

    Choanoflagellates are unicellular organisms whose intriguing morphology includes a set of collars/microvilli emanating from the cell body, surrounding the beating flagellum. We investigated the role of the microvilli in the feeding and swimming behavior of the organism using a three-dimensional model based on the method of regularized Stokeslets. This model allows us to examine the velocity generated around the feeding organism tethered in place, as well as to predict the paths of surrounding free flowing particles. In particular, we can depict the effective capture of nutritional particles and bacteria in the fluid, showing the hydrodynamic cooperation between the cell, flagellum, and microvilli of the organism. Funding Source: Murchison Undergraduate Research Fellowship.
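The method of regularized Stokeslets replaces the singular point-force solution of Stokes flow by one driven by a smooth blob of width eps, so the velocity stays finite everywhere. A sketch of the standard 3D kernel for the commonly used algebraic blob of Cortez (the blob choice and parameters in the choanoflagellate model itself may differ):

```python
import numpy as np

def reg_stokeslet_velocity(x_eval, x0, f, eps, mu=1.0):
    """Velocity at x_eval induced by a regularized point force f at x0,
    using the classical 3D regularized Stokeslet kernel:
    u = [(r^2 + 2*eps^2) f + (f . r) r] / (8*pi*mu*(r^2 + eps^2)^{3/2}).
    Finite everywhere, including at x_eval = x0."""
    r = x_eval - x0
    r2 = float(np.dot(r, r))
    denom = 8.0 * np.pi * mu * (r2 + eps**2) ** 1.5
    return ((r2 + 2.0 * eps**2) * f + np.dot(f, r) * r) / denom

f = np.array([1.0, 0.0, 0.0])
# Finite at the force location, unlike the singular Stokeslet
u_center = reg_stokeslet_velocity(np.zeros(3), np.zeros(3), f, eps=0.1)
# Far from the force, the regularized and singular kernels agree
u_far = reg_stokeslet_velocity(np.array([2.0, 0.0, 0.0]), np.zeros(3), f, eps=1e-3)
```

Distributing many such forces along the flagellum and microvilli, and summing their velocity fields, is how models of this type predict the particle paths described above.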

  5. UV radiation transmittance: regular clothing versus sun-protective clothing.

    PubMed

    Bielinski, Kenneth; Bielinski, Nolan

    2014-09-01

    There are many clothing options available for patients who are interested in limiting their exposure to UV radiation; however, these options can be confusing for patients. For dermatologists, there is limited clinical data regarding the advantages, if any, of sun-protective clothing. In this study, we examined the UV radiation transmittance of regular clothing versus sun-protective clothing. We found that regular clothing may match or even exceed sun-protective clothing in blocking the transmittance of UV radiation. These data will help dermatologists better counsel their patients on clothing options for sun protection.

  6. Construction of regular black holes in general relativity

    NASA Astrophysics Data System (ADS)

    Fan, Zhong-Ying; Wang, Xiaobao

    2016-12-01

    We present a general procedure for constructing exact black hole solutions with electric or magnetic charges in general relativity coupled to a nonlinear electrodynamics. We obtain a variety of two-parameter family spherically symmetric black hole solutions. In particular, the singularity at the center of the space-time can be canceled in the parameter space and the black hole solutions become regular everywhere in space-time. We study the global properties of the solutions and derive the first law of thermodynamics. We also generalize the procedure to include a cosmological constant and construct regular black hole solutions that are asymptotic to anti-de Sitter space-time.

  7. Regular bouncing cosmological solutions in effective actions in four dimensions

    NASA Astrophysics Data System (ADS)

    Constantinidis, C. P.; Fabris, J. C.; Furtado, R. G.; Picco, M.

    2000-02-01

    We study cosmological scenarios resulting from effective actions in four dimensions which are, under some assumptions, connected with multidimensional, supergravity and string theories. These effective actions are labeled by the parameters ω, the dilaton coupling constant, and n which establishes the coupling between the dilaton and a scalar field originating from the gauge field existing in the original theories. There is a large class of bouncing as well as Friedmann-like solutions. We investigate under which conditions bouncing regular solutions can be obtained. In the case of the string effective action, regularity is obtained through the inclusion of contributions from the Ramond-Ramond sector of superstring.

  8. The structure of split regular BiHom-Lie algebras

    NASA Astrophysics Data System (ADS)

    Calderón, Antonio J.; Sánchez, José M.

    2016-12-01

We introduce the class of split regular BiHom-Lie algebras as the natural extension of the one of split Hom-Lie algebras and so of split Lie algebras. We show that an arbitrary split regular BiHom-Lie algebra L is of the form L = U + ∑_j I_j, with U a linear subspace of a fixed maximal abelian subalgebra H and any I_j a well described (split) ideal of L, satisfying [I_j, I_k] = 0 if j ≠ k. Under certain conditions, the simplicity of L is characterized and it is shown that L is the direct sum of the family of its simple ideals.

  9. One-way regular electromagnetic mode immune to backscattering.

    PubMed

    Deng, Xiaohua; Hong, Lujun; Zheng, Xiaodong; Shen, Linfang

    2015-05-10

    In this paper, we present a basic model of robust one-way electromagnetic modes at microwave frequencies, which is formed by a semi-infinite gyromagnetic yttrium-iron-garnet with dielectric cladding terminated by a metal plate. It is shown that this system supports not only one-way surface magnetoplasmons (SMPs) but also a one-way regular mode, which is guided by the mechanism of total internal reflection. Like one-way SMPs, the one-way regular mode can be immune to backscattering, and two types of one-way modes together make up a complete dispersion band for the system.

  10. Adding Asymmetrically Dominated Alternatives: Violations of Regularity & the Similarity Hypothesis.

    DTIC Science & Technology

    1981-07-01

statistically significant (McNemar Test, Siegel 1956) at a p < 0.05 level. Technically, however, the test of regularity should code switching to the decoy as...who switched between target and competitor, 63% switched to the target, 37% to the competitor. McNemar Test: χ² = (28)²/109 = 7.2, p < .05. 4. Grouping those...who switched to the decoy with the competitor (for a strong test of regularity), 59% switched to the target while 41% switched away. McNemar Test: …

  11. Recursive support vector machines for dimensionality reduction.

    PubMed

    Tao, Qing; Chu, Dejun; Wang, Jue

    2008-01-01

The usual dimensionality reduction technique in supervised learning is mainly based on linear discriminant analysis (LDA), but it suffers from singularity and undersampling problems. On the other hand, a regular support vector machine (SVM) separates the data only in terms of one single direction of maximum margin, and the classification accuracy may not be good enough. In this letter, a recursive SVM (RSVM) is presented, in which several orthogonal directions that best separate the data with the maximum margin are obtained. Theoretical analysis shows that a completely orthogonal basis can be derived in the feature subspace spanned by the training samples and that the margin decreases along the recursive components in linearly separable cases. As a result, a new dimensionality reduction technique based on multilevel maximum margin components, and in turn a classifier with high accuracy, are achieved. Experiments on synthetic and several real data sets show that RSVM using multilevel maximum margin features can perform efficient dimensionality reduction and outperform regular SVM in binary classification problems.
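
A hedged sketch of the recursive idea (our own minimal interpretation, not the authors' exact algorithm): fit a linear max-margin direction, deflate the data onto its orthogonal complement, and repeat, so that successive directions are mutually orthogonal:

```python
import numpy as np

def hinge_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM (no bias term) via subgradient descent on the hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        viol = y * (X @ w) < 1                       # margin violators
        grad = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y)
        w -= lr * grad
    return w

def recursive_svm(X, y, k=2):
    """Sketch of a recursive SVM: extract k mutually orthogonal
    separating directions by deflating the data after each fit."""
    dirs = []
    Xd = X.astype(float).copy()
    for _ in range(k):
        w = hinge_svm(Xd, y)
        w = w / np.linalg.norm(w)
        dirs.append(w)
        Xd = Xd - np.outer(Xd @ w, w)                # project onto orthogonal complement
    return np.array(dirs)
```

Because each fit starts from zero and updates only along (deflated) data vectors, every new direction stays exactly orthogonal to the previous ones, mirroring the orthogonal-basis property the abstract describes.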

  12. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization.

    PubMed

    Canales-Rodríguez, Erick J; Daducci, Alessandro; Sotiropoulos, Stamatios N; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data.
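
The Richardson-Lucy update at the core of these methods can be sketched for the classic Poisson-likelihood case (a minimal 1D illustration; the Rician and noncentral-Chi variants developed in the paper modify the likelihood term and are not reproduced here):

```python
import numpy as np

def richardson_lucy(b, psf, iters=100):
    """Classic Richardson-Lucy deconvolution in 1D.

    Multiplicative update: u <- u * correlate(b / (u * psf), psf),
    which preserves nonnegativity of the estimate.
    """
    psf = psf / psf.sum()                 # normalize the point-spread function
    psf_flip = psf[::-1]                  # adjoint of convolution = correlation
    u = np.full_like(b, b.mean())         # flat positive initial estimate
    for _ in range(iters):
        conv = np.convolve(u, psf, mode="same")
        ratio = b / np.maximum(conv, 1e-12)
        u = u * np.convolve(ratio, psf_flip, mode="same")
    return u
```

On noiseless blurred data the iterates progressively sharpen the blurred peaks back toward the underlying signal; realistic noise models (as in RUMBA-SD) change how the ratio term is formed.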

  13. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024

  14. Minimum divergence viscous flow simulation through finite difference and regularization techniques

    NASA Astrophysics Data System (ADS)

    Victor, Rodolfo A.; Mirabolghasemi, Maryam; Bryant, Steven L.; Prodanović, Maša

    2016-09-01

We develop a new algorithm to simulate single- and two-phase viscous flow through a three-dimensional Cartesian representation of the porous space, such as those available through X-ray microtomography. We use the finite difference method to discretize the governing equations and also propose a new method to enforce the incompressible flow constraint under zero Neumann boundary conditions for the velocity components. The finite difference formulation leads to fast parallel implementation through linear solvers for sparse matrices, allowing relatively fast simulations, while regularization techniques used in solving inverse problems lead to the desired incompressible fluid flow. Tests performed using benchmark samples show good agreement with experimental/theoretical values. Additional tests are run on Bentheimer and Buff Berea sandstone samples with available laboratory measurements. We compare the results from our new method, based on finite differences, with an open source finite volume implementation as well as experimental results, specifically to evaluate the benefits and drawbacks of each method. Finally, we calculate relative permeability by using this modified finite difference technique together with a level set based algorithm for multi-phase fluid distribution in the pore space. To our knowledge, this is the first time regularization techniques have been used in combination with finite difference fluid flow simulations.

  15. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and the relationships between the true local field and the estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and the estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method combining variable spherical kernel sizes with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy.

  16. Nonlinear run-ups of regular waves on sloping structures

    NASA Astrophysics Data System (ADS)

    Hsu, T.-W.; Liang, S.-J.; Young, B.-D.; Ou, S.-H.

    2012-12-01

For coastal risk mapping, it is extremely important to accurately predict wave run-ups, since they influence overtopping calculations; however, nonlinear run-ups of regular waves on sloping structures are still not accurately modeled. We report the development of a high-order numerical model for regular waves based on the second-order nonlinear Boussinesq equations (BEs) derived by Wei et al. (1995). We calculated 160 cases of wave run-ups of nonlinear regular waves over various sloped structures. Laboratory experiments were conducted in a wave flume for regular waves propagating over three plane slopes: tan α = 1/5, 1/4, and 1/3. The numerical results, laboratory observations, as well as previous datasets were in good agreement. We have also proposed an empirical formula for the relative run-up in terms of two parameters: the Iribarren number ξ and the structure slope tan α. The prediction capability of the proposed formula was tested using previous data covering the range ξ ≤ 3 and 1/5 ≤ tan α ≤ 1/2 and found to be acceptable. Our study serves as a stepping stone to investigate run-up predictions for irregular waves and more complex geometries of coastal structures.

  17. Rhythm's Gonna Get You: Regular Meter Facilitates Semantic Sentence Processing

    ERIC Educational Resources Information Center

    Rothermich, Kathrin; Schmidt-Kassow, Maren; Kotz, Sonja A.

    2012-01-01

    Rhythm is a phenomenon that fundamentally affects the perception of events unfolding in time. In language, we define "rhythm" as the temporal structure that underlies the perception and production of utterances, whereas "meter" is defined as the regular occurrence of beats (i.e. stressed syllables). In stress-timed languages such as German, this…

  18. 75 FR 1057 - Farm Credit Administration Board; Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-08

    ... [Federal Register Volume 75, Number 5 (Friday, January 8, 2010)] [Notices] [Page 1057] [FR Doc No: 2010-246] FARM CREDIT ADMINISTRATION Farm Credit Administration Board; Regular Meeting AGENCY: Farm Credit Administration. SUMMARY: Notice is hereby given, pursuant to the Government in the Sunshine Act...

  19. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared with other competitive state-of-the-art techniques.
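
The core loop is short enough to sketch directly (a minimal numpy version of the update rule described above: refill missing entries from the current estimate, then soft-threshold the SVD; the paper's warm starts and low-rank SVD machinery are omitted):

```python
import numpy as np

def soft_impute(X, mask, lam, iters=200):
    """Soft-Impute sketch: alternate between refilling the missing entries
    from the current estimate and soft-thresholding the singular values."""
    W = np.zeros_like(X)
    for _ in range(iters):
        filled = np.where(mask, X, W)          # observed entries are kept fixed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)           # soft-threshold the spectrum
        W = (U * s) @ Vt
    return W
```

Each iteration decreases the nuclear-norm-regularized objective 0.5·||P_Ω(X − W)||² + λ·||W||_*, so the loop converges to a low-rank completion of the observed entries.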

  20. Regularized Partial and/or Constrained Redundancy Analysis

    ERIC Educational Resources Information Center

    Takane, Yoshio; Jung, Sunho

    2008-01-01

    Methods of incorporating a ridge type of regularization into partial redundancy analysis (PRA), constrained redundancy analysis (CRA), and partial and constrained redundancy analysis (PCRA) were discussed. The usefulness of ridge estimation in reducing mean square error (MSE) has been recognized in multiple regression analysis for some time,…
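
The ridge idea these methods build on can be illustrated in a few lines (plain ridge regression, not the authors' redundancy-analysis variants):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator: beta = (X'X + lam*I)^{-1} X'y.
    lam = 0 recovers ordinary least squares (when X'X is invertible)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Increasing lam shrinks the coefficient norm, trading a small bias for a reduction in variance, which is the MSE-reduction effect the abstract refers to.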

  1. An Interesting Lemma for Regular C-fractions

    NASA Astrophysics Data System (ADS)

    Chen, Kwang-Wu

    2003-12-01

    In this short note we give an interesting lemma for regular C-fractions. Applying this lemma we obtain some congruence properties of some classical numbers such as the Springer numbers of even index, the median Euler numbers, the median Genocchi numbers, and the tangent numbers.

  2. Psychological Benefits of Regular Physical Activity: Evidence from Emerging Adults

    ERIC Educational Resources Information Center

    Cekin, Resul

    2015-01-01

    Emerging adulthood is a transitional stage between late adolescence and young adulthood in life-span development that requires significant changes in people's lives. Therefore, identifying protective factors for this population is crucial. This study investigated the effects of regular physical activity on self-esteem, optimism, and happiness in…

  3. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from the low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes the image by total variation (TV) and/or wavelet or some other transform, which introduces some artifacts. To address these shortcomings, a new framework for single-image SR is proposed by utilizing an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial relevance among pixels first, and then only the high frequency (HF) part, which is sparser in the TV and transform domain, is considered as the regularization term. Concretely, through transforming the original model, the SR problem can be solved by two alternating iteration sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high quality HF part and HR image can be obtained by solving the first and second sub-problem, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.

  4. Regularization of open superstring from orientable closed surface

    SciTech Connect

    Frampton, P.H.; Kshirsagar, A.K.; Ng, Y.J.

    1986-10-15

    By tracing the one-loop annulus and Moebius diagrams to a common origin, as integration contours on a torus, the principal-part regularization of the open superstring is given some justification. The result hints at the possibility of a simple topological expansion for open superstrings.

  5. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Cable television system regular monitoring. 76.614 Section 76.614 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable...

  6. Regularization in Short-Term Memory for Serial Order

    ERIC Educational Resources Information Center

    Botvinick, Matthew; Bylsma, Lauren M.

    2005-01-01

    Previous research has shown that short-term memory for serial order can be influenced by background knowledge concerning regularities of sequential structure. Specifically, it has been shown that recall is superior for sequences that fit well with familiar sequencing constraints. The authors report a corresponding effect pertaining to serial…

  7. Maximal regularity for perturbed integral equations on periodic Lebesgue spaces

    NASA Astrophysics Data System (ADS)

    Lizama, Carlos; Poblete, Verónica

    2008-12-01

    We characterize the maximal regularity of periodic solutions for an additive perturbed integral equation with infinite delay in the vector-valued Lebesgue spaces. Our method is based on operator-valued Fourier multipliers. We also study resonances, characterizing the existence of solutions in terms of a compatibility condition on the forcing term.

  8. Implicit Learning of L2 Word Stress Regularities

    ERIC Educational Resources Information Center

    Chan, Ricky K. W.; Leung, Janny H. C.

    2014-01-01

    This article reports an experiment on the implicit learning of second language stress regularities, and presents a methodological innovation on awareness measurement. After practising two-syllable Spanish words, native Cantonese speakers with English as a second language (L2) completed a judgement task. Critical items differed only in placement of…

  9. Elementary Teachers' Perspectives of Inclusion in the Regular Education Classroom

    ERIC Educational Resources Information Center

    Olinger, Becky Lorraine

    2013-01-01

    The purpose of this qualitative study was to examine regular education and special education teacher perceptions of inclusion services in an elementary school setting. In this phenomenological study, purposeful sampling techniques and data were used to conduct a study of inclusion in the elementary schools. In-depth one-to-one interviews with 8…

  10. 29 CFR 778.408 - The specified regular rate.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... POLICY OR INTERPRETATION NOT DIRECTLY RELATED TO REGULATIONS OVERTIME COMPENSATION Exceptions From the Regular Rate Principles Guaranteed Compensation Which Includes Overtime Pay § 778.408 The specified... reasonably be expected to be operative in controlling the employee's compensation. (c) The rate specified...

  11. New Technologies in Portugal: Regular Middle and High School

    ERIC Educational Resources Information Center

    Florentino, Teresa; Sanchez, Lucas; Joyanes, Luis

    2010-01-01

    Purpose: The purpose of this paper is to elaborate upon the relation between information and communication technologies (ICT), particularly web-based resources, and their use, programs and learning in Portuguese middle and high regular public schools. Design/methodology/approach: Adding collected documentation on curriculum, laws and other related…

  12. Rotating bearings in regular and irregular granular shear packings.

    PubMed

    Aström, J A

    2008-01-01

    For 2D regular dense packings of solid mono-size non-sliding disks there is a mechanism for bearing formation under shear that can be explained theoretically. There is, however, no easy way to extend this model to include random dense packings which would better describe natural packings. A numerical model that simulates shear deformation for both near-regular and irregular packings is used to demonstrate that rotating bearings appear roughly with the same density in random and regular packings. The main difference appears in the size distribution of the rotating clusters near the jamming threshold. The size distribution is well described by a scaling form with a large-size cut-off that seems to grow without bounds for regular packings at the jamming threshold, while it remains finite for irregular packings. At packing densities above the jamming transition there can be no shear, unless the disks are allowed to break. Breaking of disks induces a large number of small local bearings. Clusters of rotating particles may contribute to e.g. pre-rupture yielding in landslides, snow avalanches and to the formation of aseismic gaps in tectonic fault zones.

  13. Autocorrelation and Regularization of Query-Based Information Retrieval Scores

    DTIC Science & Technology

    2008-02-01

projected scores. This problem has similar solutions to monolingual regularization. The iterative solution is f_t^{t+1} = (1 − α) y_t + α S_t f_t^t (8.7). The... multilingual corpora. In Manuela M. Veloso, editor, IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence
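
The quoted iteration (Eq. 8.7) is a standard label-propagation-style fixed point: repeatedly blend the original scores y with a similarity-smoothed version of the current scores. A hedged numpy sketch (generic row-stochastic similarity matrix S, not the report's cross-lingual setup):

```python
import numpy as np

def regularize_scores(y, S, alpha=0.5, iters=200):
    """Iterate f <- (1 - alpha)*y + alpha*S f until the fixed point.
    Converges when the spectral radius of alpha*S is below 1
    (e.g. for a row-stochastic S and alpha < 1)."""
    f = y.copy()
    for _ in range(iters):
        f = (1.0 - alpha) * y + alpha * (S @ f)
    return f
```

The fixed point has the closed form f = (I − α S)^{-1} (1 − α) y, so the iteration can be checked against a direct linear solve.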

  14. Identifying Basketball Performance Indicators in Regular Season and Playoff Games

    PubMed Central

    García, Javier; Ibáñez, Sergio J.; De Santos, Raúl Martinez; Leite, Nuno; Sampaio, Jaime

    2013-01-01

    The aim of the present study was to identify basketball game performance indicators which best discriminate winners and losers in regular season and playoffs. The sample used was composed by 323 games of ACB Spanish Basketball League from the regular season (n=306) and from the playoffs (n=17). A previous cluster analysis allowed splitting the sample in balanced (equal or below 12 points), unbalanced (between 13 and 28 points) and very unbalanced games (above 28 points). A discriminant analysis was used to identify the performance indicators either in regular season and playoff games. In regular season games, the winning teams dominated in assists, defensive rebounds, successful 2 and 3-point field-goals. However, in playoff games the winning teams’ superiority was only in defensive rebounding. In practical applications, these results may help the coaches to accurately design training programs to reflect the importance of having different offensive set plays and also have specific conditioning programs to prepare for defensive rebounding. PMID:23717365

  15. Analysis of Tikhonov regularization for function approximation by neural networks.

    PubMed

    Burger, Martin; Neubauer, Andreas

    2003-01-01

This paper is devoted to the convergence and stability analysis of Tikhonov regularization for function approximation by a class of feed-forward neural networks with one hidden layer and a linear output layer. We investigate two frequently used approaches, namely regularization by output smoothing and regularization by weight decay, as well as a combination of both methods to combine their advantages. We show that in all cases stable approximations are obtained, converging to the approximated function in a desired Sobolev space as the noise in the data tends to zero (in the weaker L²-norm), if the regularization parameter and the number of units in the network are chosen appropriately. Under additional smoothness assumptions we are able to show convergence rate results in terms of the noise level and the number of units in the network. In addition, we show how the theoretical results can be applied to the important classes of perceptrons with one hidden layer and to translation networks. Finally, the performance of the different approaches is compared in some numerical examples.
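
Weight decay on the linear output layer can be illustrated with a one-hidden-layer network whose hidden weights are fixed at random (a random-feature sketch under our own assumptions, not the paper's training procedure; the output-smoothing approach is omitted):

```python
import numpy as np

def rf_fit(X, y, n_hidden=50, decay=1e-2, seed=0):
    """One-hidden-layer tanh network with fixed random hidden weights;
    only the linear output layer is trained, with Tikhonov weight decay:
    beta = (H'H + decay*I)^{-1} H'y, where H = tanh(X W)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    H = np.tanh(X @ W)
    beta = np.linalg.solve(H.T @ H + decay * np.eye(n_hidden), H.T @ y)
    return W, beta

def rf_predict(X, W, beta):
    return np.tanh(X @ W) @ beta
```

Larger decay values shrink the output weights (stabilizing the fit against noise), at the cost of a smoother, more biased approximation, which is the stability/convergence tradeoff the analysis formalizes.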

  16. New vision based navigation clue for a regular colonoscope's tip

    NASA Astrophysics Data System (ADS)

    Mekaouar, Anouar; Ben Amar, Chokri; Redarce, Tanneguy

    2009-02-01

Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to be performed safely. Indeed, the practitioner must contend with both the tortuousness of the colon and the mastering of a colonoscope. He has to take the visual data acquired by the scope's tip into account and rely mostly on his common sense and skill to steer it in a fashion promoting a safe insertion of the device's shaft. In that context, we propose a new navigation clue for the tip of a regular colonoscope in order to assist surgeons during a colonoscopic examination. Firstly, we consider a patch of the inner colon depicted in a regular colonoscopy frame. Then we perform a sketchy 3D reconstruction of the corresponding 2D data. A suggested navigation trajectory then follows from the obtained relief. The visible and invisible lumen cases are considered. Owing to its low computational cost, this strategy allows for intraoperative configuration changes and thus reduces the effect of the colon's non-rigidity. Besides, it tends to provide a safe navigation trajectory through the whole colon, since this approach aims at keeping the extremity of the instrument as far as possible from the colon wall during navigation. In order to implement the considered process, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.

  17. Interrupting Sitting Time with Regular Walks Attenuates Postprandial Triglycerides.

    PubMed

    Miyashita, M; Edamoto, K; Kidokoro, T; Yanaoka, T; Kashiwabara, K; Takahashi, M; Burns, S

    2016-02-01

    We compared the effects of prolonged sitting with the effects of sitting interrupted by regular walking and the effects of prolonged sitting after continuous walking on postprandial triglyceride in postmenopausal women. 15 participants completed 3 trials in random order: 1) prolonged sitting, 2) regular walking, and 3) prolonged sitting preceded by continuous walking. During the sitting trial, participants rested for 8 h. For the walking trials, participants walked briskly in either twenty 90-sec bouts over 8 h or one 30-min bout in the morning (09:00-09:30). Except for walking, both exercise trials mimicked the sitting trial. In each trial, participants consumed a breakfast (08:00) and lunch (11:00). Blood samples were collected in the fasted state and at 2, 4, 6 and 8 h after breakfast. The serum triglyceride incremental area under the curve was 15 and 14% lower after regular walking compared with prolonged sitting and prolonged sitting after continuous walking (4.73±2.50 vs. 5.52±2.95 vs. 5.50±2.59 mmol/L∙8 h respectively, main effect of trial: P=0.023). Regularly interrupting sitting time with brief bouts of physical activity can reduce postprandial triglyceride in postmenopausal women.

  18. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, G.C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. ?? 2008 Springer-Verlag Berlin Heidelberg.
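
The regularized inversion and the data-fit/model-norm tradeoff described here can be sketched generically (hypothetical linear forward operator G and prior model m0 introduced for illustration; this is not the authors' flow model):

```python
import numpy as np

def invert(G, d, m0, lam):
    """Tikhonov inversion toward a prior model m0:
        minimize ||G m - d||^2 + lam^2 ||m - m0||^2,
    whose solution is m = m0 + (G'G + lam^2 I)^{-1} G'(d - G m0)."""
    n = G.shape[1]
    dm = np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ (d - G @ m0))
    return m0 + dm
```

Scanning lam and recording the data misfit ||G m − d|| against the model norm ||m − m0|| traces the tradeoff curve described in the abstract: heavier regularization pulls the estimate toward the prior at the cost of a larger data misfit.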

  19. Involving Impaired, Disabled, and Handicapped Persons in Regular Camp Programs.

    ERIC Educational Resources Information Center

    American Alliance for Health, Physical Education, and Recreation, Washington, DC. Information and Research Utilization Center.

    The publication provides some broad guidelines for serving impaired, disabled, and handicapped children in nonspecialized or regular day and residential camps. Part One on the rationale and basis for integrated camping includes three chapters which cover mainstreaming and the normalization principle, the continuum of services (or Cascade System)…

  20. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...

  1. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...

  2. The Student with Albinism in the Regular Classroom.

    ERIC Educational Resources Information Center

    Ashley, Julia Robertson

    This booklet, intended for regular education teachers who have children with albinism in their classes, begins with an explanation of albinism, then discusses the special needs of the student with albinism in the classroom, and presents information about adaptations and other methods for responding to these needs. Special social and emotional…

  3. Simulated Administration of a Regular Guidance Operation (SARGO).

    ERIC Educational Resources Information Center

    Fredrickson, Ronald H.; Popken, Charles F.

    Simulated Administration of a Regular Guidance Operation (SARGO) is a program for the training of directors of guidance and pupil personnel services. The objective of SARGO is to prepare directors of guidance services to: (1) prepare a written description of a pupil personnel program; (2) interact with a school administrator to clarify role…

  4. Rotating bearings in regular and irregular granular shear packings

    NASA Astrophysics Data System (ADS)

Åström, J. A.

    2008-01-01

For 2D regular dense packings of solid mono-size non-sliding disks there is a mechanism for bearing formation under shear that can be explained theoretically. There is, however, no easy way to extend this model to include random dense packings, which would better describe natural packings. A numerical model that simulates shear deformation for both near-regular and irregular packings is used to demonstrate that rotating bearings appear with roughly the same density in random and regular packings. The main difference appears in the size distribution of the rotating clusters near the jamming threshold. The size distribution is well described by a scaling form with a large-size cut-off that seems to grow without bounds for regular packings at the jamming threshold, while it remains finite for irregular packings. At packing densities above the jamming transition there can be no shear unless the disks are allowed to break. Breaking of disks induces a large number of small local bearings. Clusters of rotating particles may contribute to, e.g., pre-rupture yielding in landslides and snow avalanches, and to the formation of aseismic gaps in tectonic fault zones.

  5. Nonnative Processing of Verbal Morphology: In Search of Regularity

    ERIC Educational Resources Information Center

    Gor, Kira; Cook, Svetlana

    2010-01-01

    There is little agreement on the mechanisms involved in second language (L2) processing of regular and irregular inflectional morphology and on the exact role of age, amount, and type of exposure to L2 resulting in differences in L2 input and use. The article contributes to the ongoing debates by reporting the results of two experiments on Russian…

  6. Regularity and Energy Conservation for the Compressible Euler Equations

    NASA Astrophysics Data System (ADS)

    Feireisl, Eduard; Gwiazda, Piotr; Świerczewska-Gwiazda, Agnieszka; Wiedemann, Emil

    2017-03-01

    We give sufficient conditions on the regularity of solutions to the inhomogeneous incompressible Euler and the compressible isentropic Euler systems in order for the energy to be conserved. Our strategy relies on commutator estimates similar to those employed by Constantin et al. for the homogeneous incompressible Euler equations.

  7. Preverbal Infants Infer Intentional Agents from the Perception of Regularity

    ERIC Educational Resources Information Center

    Ma, Lili; Xu, Fei

    2013-01-01

    Human adults have a strong bias to invoke intentional agents in their intuitive explanations of ordered wholes or regular compositions in the world. Less is known about the ontogenetic origin of this bias. In 4 experiments, we found that 9-to 10-month-old infants expected a human hand, but not a mechanical tool with similar affordances, to be the…

  8. Low thrust space vehicle trajectory optimization using regularized variables

    NASA Technical Reports Server (NTRS)

    Schwenzfeger, K. J.

    1974-01-01

    Optimizing the trajectory of a low thrust space vehicle usually means solving a nonlinear two point boundary value problem. In general, accuracy requirements necessitate extensive computation times. In celestial mechanics, regularizing transformations of the equations of motion are used to eliminate computational and analytical problems that occur during close approaches to gravitational force centers. It was shown in previous investigations that regularization in the formulation of the trajectory optimization problem may reduce the computation time. In this study, a set of regularized equations describing the optimal trajectory of a continuously thrusting space vehicle is derived. The computational characteristics of the set are investigated and compared to the classical Newtonian unregularized set of equations. The comparison is made for low thrust, minimum time, escape trajectories and numerical calculations of Keplerian orbits. The comparison indicates that in the cases investigated for bad initial guesses of the known boundary values a remarkable reduction in the computation time was achieved. Furthermore, the investigated set of regularized equations shows high numerical stability even for long duration flights and is less sensitive to errors in the guesses of the unknown boundary values.

  9. Mainstreaming: Educable Mentally Retarded Children in Regular Classes.

    ERIC Educational Resources Information Center

    Birch, Jack W.

    Described in the monograph are mainstreaming programs for educable mentally retarded (EMR) children in six variously sized school districts within five states. It is noted that mainstreaming is based on the principle of educating most children in the regular classroom and providing special education on the basis of learning needs rather than…

  10. Regular Class Participation System (RCPS). A Final Report.

    ERIC Educational Resources Information Center

    Ferguson, Dianne L.; And Others

    The Regular Class Participation System (RCPS) project attempted to develop, implement, and validate a system for placing and maintaining students with severe disabilities in general education classrooms, with a particular emphasis on achieving both social and learning outcomes for students. A teacher-based planning strategy was developed and…

  11. The Visually Impaired Student in the Regular Classroom.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    The guide provides strategies for regular teachers to use with visually impaired (VI) students in the province of Alberta, Canada. After an introduction, definitions of terms such as "adventitiously blind" are presented. Next addressed are effects of visual impairment on cognitive development, emotional and social aspects, and…

  12. Identifying and Exploiting Spatial Regularity in Data Memory References

    SciTech Connect

    Mohan, T; de Supinski, B R; McKee, S A; Mueller, F; Yoo, A; Schulz, M

    2003-07-24

    The growing processor/memory performance gap causes the performance of many codes to be limited by memory accesses. If known to exist in an application, strided memory accesses forming streams can be targeted by optimizations such as prefetching, relocation, remapping, and vector loads. Undetected, they can be a significant source of memory stalls in loops. Existing stream-detection mechanisms either require special hardware, which may not gather statistics for subsequent analysis, or are limited to compile-time detection of array accesses in loops. Formally, little treatment has been accorded to the subject; the concept of locality fails to capture the existence of streams in a program's memory accesses. The contributions of this paper are as follows. First, we define spatial regularity as a means to discuss the presence and effects of streams. Second, we develop measures to quantify spatial regularity, and we design and implement an on-line, parallel algorithm to detect streams - and hence regularity - in running applications. Third, we use examples from real codes and common benchmarks to illustrate how derived stream statistics can be used to guide the application of profile-driven optimizations. Overall, we demonstrate the benefits of our novel regularity metric as a low-cost instrument to detect potential for code optimizations affecting memory performance.
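A toy, offline version of stream detection, grouping an address trace into maximal constant-stride runs (the paper's on-line, parallel algorithm is considerably more elaborate; names here are illustrative):

```python
def detect_streams(addresses, min_len=3):
    """Scan a memory-address trace for maximal constant-stride runs
    ('streams'). Returns (start_address, stride, length) tuples for
    runs of at least min_len accesses; zero-stride runs (repeated
    accesses to one address) are ignored."""
    streams = []
    i = 0
    while i + 1 < len(addresses):
        stride = addresses[i + 1] - addresses[i]
        j = i + 1
        # extend the run while consecutive differences match the stride
        while j + 1 < len(addresses) and addresses[j + 1] - addresses[j] == stride:
            j += 1
        length = j - i + 1
        if length >= min_len and stride != 0:
            streams.append((addresses[i], stride, length))
        i = j
    return streams
```

Streams found this way are exactly the access patterns the abstract says can be targeted by prefetching, remapping, or vector loads.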

  13. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
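The ℓp/Schatten-norm link mentioned in the abstract can be illustrated for p = 1, where it is a standard result: the proximal map of the nuclear (Schatten-1) norm is obtained by applying the ℓ1 proximal map, soft thresholding, to the singular values. A minimal sketch (not the paper's Hessian-based operators):

```python
import numpy as np

def prox_l1(x, t):
    """Proximal map of t*||x||_1: elementwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_nuclear(X, t):
    """Proximal map of t*||X||_* (Schatten norm of order 1):
    soft-threshold the singular values -- the l_p / Schatten-p
    link, instantiated here for p = 1."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_l1(s, t)) @ Vt
```

In ADMM-style splitting schemes like the paper's, such proximal maps are the per-iteration subproblem solvers for the regularization term.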

  14. Cost Effectiveness of Premium Versus Regular Gasoline in MCPS Buses.

    ERIC Educational Resources Information Center

    Baacke, Clifford M.; Frankel, Steven M.

    The primary question posed in this study is whether premium or regular gasoline is more cost effective for the Montgomery County Public School (MCPS) bus fleet, as a whole, when miles-per-gallon, cost-per-gallon, and repair costs associated with mileage are considered. On average, both miles-per-gallon, and repair costs-per-mile favor premium…

15. Regular Strongly Typical Blocks of 𝒪^𝔮

    NASA Astrophysics Data System (ADS)

    Frisk, Anders; Mazorchuk, Volodymyr

    2009-10-01

We use the technique of Harish-Chandra bimodules to prove that regular strongly typical blocks of the category 𝒪 for the queer Lie superalgebra 𝔮_n are equivalent to the corresponding blocks of the category 𝒪 for the Lie algebra 𝔤𝔩_n.

  16. Adult Regularization of Inconsistent Input Depends on Pragmatic Factors

    ERIC Educational Resources Information Center

    Perfors, Amy

    2016-01-01

    In a variety of domains, adults who are given input that is only partially consistent do not discard the inconsistent portion (regularize) but rather maintain the probability of consistent and inconsistent portions in their behavior (probability match). This research investigates the possibility that adults probability match, at least in part,…

  17. Statistical regularities in art: Relations with visual coding and perception.

    PubMed

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study.

  18. The Hearing Impaired Student in the Regular Classroom.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    The guide provides strategies for teachers to use with deaf and hearing impaired (HI) students in regular classrooms in the province of Alberta, Canada. An introductory section includes symptoms of a suspected hearing loss and a sample audiogram to aid teachers in recognizing the problem. Ways to meet special needs at different age levels are…

  19. The Physically/Medically Handicapped Student in the Regular Classroom.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    The guide outlines modifications, adaptations, and social interaction approaches for school staff to use with physically handicapped and regular students in integrated classrooms in the province of Alberta, Canada. Guidelines are provided for the following main categories and subsets (in parentheses): lifting and transferring techniques (methods…

  20. The properties of probabilistic simple regular sticker system

    NASA Astrophysics Data System (ADS)

    Selvarajoo, Mathuri; Fong, Wan Heng; Sarmin, Nor Haniza; Turaev, Sherzod

    2015-10-01

A mathematical model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced in 1998. In a sticker system, the sticker operation is based on the Watson-Crick complementarity of DNA molecules. The computation of a sticker system starts from an incomplete double-stranded sequence. Then, by iterative sticking operations, a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules (including the simple regular sticker system) generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of the languages generated by sticker systems. In this paper, we study the properties of probabilistic simple regular sticker systems. In this variant of sticker system, probabilities are associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings. The language is then selected according to some probabilistic requirement. We prove that the probabilistic enhancement increases the computational power of simple regular sticker systems.
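A minimal sketch of the probabilistic machinery described: the probability of a generated string is the product of the probabilities of the axioms occurring in its derivation, and the language is then selected by a probabilistic requirement such as a cut-point threshold (function names and the threshold rule are illustrative, not the paper's formal definitions):

```python
from math import prod  # Python 3.8+

def string_probability(axiom_probs, derivation):
    """Probability of a generated string: the product of the
    probabilities of all occurrences of initial strings (axioms)
    used in its derivation."""
    return prod(axiom_probs[a] for a in derivation)

def select_language(strings_with_probs, threshold):
    """Cut-point selection: keep only strings whose derivation
    probability exceeds the threshold -- one common probabilistic
    requirement for defining the generated language."""
    return {s for s, p in strings_with_probs.items() if p > threshold}
```

Varying the cut-point changes which strings survive, which is how the probabilistic layer can carve non-regular languages out of a regular generative core.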

  1. From Numbers to Letters: Feedback Regularization in Visual Word Recognition

    ERIC Educational Resources Information Center

    Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel

    2010-01-01

    Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…

  2. New Algorithms and Sparse Regularization for Synthetic Aperture Radar Imaging

    DTIC Science & Technology

    2015-10-26

Demanet, Department of Mathematics, Massachusetts Institute of Technology. Grant title: New Algorithms and Sparse Regularization for Synthetic Aperture Radar Imaging. ... statistical analysis of one such method, the so-called MUSIC algorithm (multiple signal classification). We have a publication that mathematically justifies the scaling of the phase transition

  3. Regular and homeward travel speeds of arctic wolves

    USGS Publications Warehouse

    Mech, L.D.

    1994-01-01

    Single wolves (Canis lupus arctos), a pair, and a pack of five habituated to the investigator on an all-terrain vehicle were followed on Ellesmere Island, Northwest Territories, Canada, during summer. Their mean travel speed was measured on barren ground at 8.7 km/h during regular travel and 10.0 km/h when returning to a den.

  4. Sparse regularization techniques provide novel insights into outcome integration processes.

    PubMed

    Mohr, Holger; Wolfensteller, Uta; Frimmel, Steffi; Ruge, Hannes

    2015-01-01

    By exploiting information that is contained in the spatial arrangement of neural activations, multivariate pattern analysis (MVPA) can detect distributed brain activations which are not accessible by standard univariate analysis. Recent methodological advances in MVPA regularization techniques have made it feasible to produce sparse discriminative whole-brain maps with highly specific patterns. Furthermore, the most recent refinement, the Graph Net, explicitly takes the 3D-structure of fMRI data into account. Here, these advanced classification methods were applied to a large fMRI sample (N=70) in order to gain novel insights into the functional localization of outcome integration processes. While the beneficial effect of differential outcomes is well-studied in trial-and-error learning, outcome integration in the context of instruction-based learning has remained largely unexplored. In order to examine neural processes associated with outcome integration in the context of instruction-based learning, two groups of subjects underwent functional imaging while being presented with either differential or ambiguous outcomes following the execution of varying stimulus-response instructions. While no significant univariate group differences were found in the resulting fMRI dataset, L1-regularized (sparse) classifiers performed significantly above chance and also clearly outperformed the standard L2-regularized (dense) Support Vector Machine on this whole-brain between-subject classification task. Moreover, additional L2-regularization via the Elastic Net and spatial regularization by the Graph Net improved interpretability of discriminative weight maps but were accompanied by reduced classification accuracies. Most importantly, classification based on sparse regularization facilitated the identification of highly specific regions differentially engaged under ambiguous and differential outcome conditions, comprising several prefrontal regions previously associated with
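The sparsity-inducing effect of ℓ1 regularization that underlies these interpretable discriminative maps can be illustrated with ISTA (iterative shrinkage-thresholding) on a lasso problem; a generic sketch, not the Graph Net or Elastic Net variants used in the study:

```python
import numpy as np

def ista_lasso(A, b, lam, steps=500):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by proximal
    gradient descent (ISTA). The l1 penalty zeroes out weak
    coefficients, which is why sparse classifiers yield highly
    specific weight maps."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # gradient of the smooth term
        z = x - g / L
        # soft thresholding = proximal map of the l1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

With an orthonormal design the solution is exactly the soft-thresholded data, making the selection behavior easy to verify by hand.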

  5. A simple way to measure daily lifestyle regularity

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Frank, Ellen; Potts, Jaime M.; Kupfer, David J.

    2002-01-01

    A brief diary instrument to quantify daily lifestyle regularity (SRM-5) is developed and compared with a much longer version of the instrument (SRM-17) described and used previously. Three studies are described. In Study 1, SRM-17 scores (2 weeks) were collected from a total of 293 healthy control subjects (both genders) aged between 19 and 92 years. Five items (1) Get out of bed, (2) First contact with another person, (3) Start work, housework or volunteer activities, (4) Have dinner, and (5) Go to bed were then selected from the 17 items and SRM-5 scores calculated as if these five items were the only ones collected. Comparisons were made with SRM-17 scores from the same subject-weeks, looking at correlations between the two SRM measures, and the effects of age and gender on lifestyle regularity as measured by the two instruments. In Study 2 this process was repeated in a group of 27 subjects who were in remission from unipolar depression after treatment with psychotherapy and who completed SRM-17 for at least 20 successive weeks. SRM-5 and SRM-17 scores were then correlated within an individual using time as the random variable, allowing an indication of how successful SRM-5 was in tracking changes in lifestyle regularity (within an individual) over time. In Study 3 an SRM-5 diary instrument was administered to 101 healthy control subjects (both genders, aged 20-59 years) for two successive weeks to obtain normative measures and to test for correlations with age and morningness. Measures of lifestyle regularity from SRM-5 correlated quite well (about 0.8) with those from SRM-17 both between subjects, and within-subjects over time. As a detector of irregularity as defined by SRM-17, the SRM-5 instrument showed acceptable values of kappa (0.69), sensitivity (74%) and specificity (95%). There were, however, differences in mean level, with SRM-5 scores being about 0.9 units [about one standard deviation (SD)] above SRM-17 scores from the same subject-weeks. SRM-5
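A simplified scoring rule in the spirit of the SRM can make the idea concrete: events are "hits" when they occur close to their own habitual time, and the score averages hits per item. The 45-minute window is the conventional SRM hit criterion, but the published algorithm differs in details; this is an illustrative sketch only:

```python
def srm_score(event_times_min):
    """Simplified lifestyle-regularity score. event_times_min is a
    list with one entry per diary item (e.g. 'get out of bed'),
    each entry being that item's clock times (minutes after
    midnight) over the week. An occurrence within 45 minutes of the
    item's weekly mean time counts as a hit; the score is the mean
    number of hits per item."""
    per_item = []
    for times in event_times_min:
        mean_t = sum(times) / len(times)
        hits = sum(1 for t in times if abs(t - mean_t) <= 45)
        per_item.append(hits)
    return sum(per_item) / len(per_item)
```

A perfectly regular week scores 7 (a hit every day); irregular items pull the average down, mirroring how SRM-5 and SRM-17 quantify lifestyle regularity with 5 versus 17 items.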

  6. A theoretical foundation for multi-scale regular vegetation patterns.

    PubMed

    Tarnita, Corina E; Bonachela, Juan A; Sheffer, Efrat; Guyton, Jennifer A; Coverdale, Tyler C; Long, Ryan A; Pringle, Robert M

    2017-01-18

Self-organized regular vegetation patterns are widespread and thought to mediate ecosystem functions such as productivity and robustness, but the mechanisms underlying their origin and maintenance remain disputed. Particularly controversial are landscapes of overdispersed (evenly spaced) elements, such as North American Mima mounds, Brazilian murundus, South African heuweltjies, and, famously, Namibian fairy circles. Two competing hypotheses are currently debated. On the one hand, models of scale-dependent feedbacks, whereby plants facilitate neighbours while competing with distant individuals, can reproduce various regular patterns identified in satellite imagery. Owing to deep theoretical roots and apparent generality, scale-dependent feedbacks are widely viewed as a unifying and near-universal principle of regular-pattern formation despite scant empirical evidence. On the other hand, many overdispersed vegetation patterns worldwide have been attributed to subterranean ecosystem engineers such as termites, ants, and rodents. Although potentially consistent with territorial competition, this interpretation has been challenged theoretically and empirically and (unlike scale-dependent feedbacks) lacks a unifying dynamical theory, fuelling scepticism about its plausibility and generality. Here we provide a general theoretical foundation for self-organization of social-insect colonies, validated using data from four continents, which demonstrates that intraspecific competition between territorial animals can generate the large-scale hexagonal regularity of these patterns. However, this mechanism is not mutually exclusive with scale-dependent feedbacks. Using Namib Desert fairy circles as a case study, we present field data showing that these landscapes exhibit multi-scale patterning, previously undocumented in this system, that cannot be explained by either mechanism in isolation. These multi-scale patterns and other emergent properties, such as enhanced resistance to

  7. Semi-regular biorthogonal pairs and generalized Riesz bases

    NASA Astrophysics Data System (ADS)

    Inoue, H.

    2016-11-01

In this paper we introduce general theories of semi-regular biorthogonal pairs, generalized Riesz bases and their physical applications. Here we deal with biorthogonal sequences {ϕn} and {ψn} in a Hilbert space H, with domains D(ϕ) = {x ∈ H : Σ_{k=0}^∞ |(x|ϕ_k)|² < ∞} and D(ψ) = {x ∈ H : Σ_{k=0}^∞ |(x|ψ_k)|² < ∞}, and linear spans Dϕ ≡ Span{ϕn} and Dψ ≡ Span{ψn}. A biorthogonal pair ({ϕn}, {ψn}) is called regular if both Dϕ and Dψ are dense in H, and it is called semi-regular if either Dϕ and D(ϕ) or Dψ and D(ψ) are dense in H. In a previous paper [H. Inoue, J. Math. Phys. 57, 083511 (2016)], we have shown that if ({ϕn}, {ψn}) is a regular biorthogonal pair then both {ϕn} and {ψn} are generalized Riesz bases defined in the work of Inoue and Takakura [J. Math. Phys. 57, 083505 (2016)]. Here we shall show that the same result holds true if the pair is only semi-regular, by using operators Tϕ,e, Te,ϕ, Tψ,e, and Te,ψ defined by an orthonormal basis e in H and a biorthogonal pair ({ϕn}, {ψn}). Furthermore, we shall apply this result to pseudo-bosons in the sense of the papers of Bagarello [J. Math. Phys. 51, 023531 (2010); J. Phys. A 44, 015205 (2011); Phys. Rev. A 88, 032120 (2013); and J. Math. Phys. 54, 063512 (2013)].

  8. Two vortex-blob regularization models for vortex sheet motion

    NASA Astrophysics Data System (ADS)

    Sohn, Sung-Ik

    2014-04-01

    Evolving vortex sheets generally form singularities in finite time. The vortex blob model is an approach to regularize the vortex sheet motion and evolve past singularity formation. In this paper, we thoroughly compare two such regularizations: the Krasny-type model and the Beale-Majda model. It is found from a linear stability analysis that both models have exponentially decaying growth rates for high wavenumbers, but the Beale-Majda model has a faster decaying rate than the Krasny model. The Beale-Majda model thus gives a stronger regularization to the solution. We apply the blob models to the two example problems: a periodic vortex sheet and an elliptically loaded wing. The numerical results show that the solutions of the two models are similar in large and small scales, but are fairly different in intermediate scales. The sheet of the Beale-Majda model has more spiral turns than the Krasny-type model for the same value of the regularization parameter δ. We give numerical evidences that the solutions of the two models agree for an increasing amount of spiral turns and tend to converge to the same limit as δ is decreased. The inner spiral turns of the blob models behave differently with the outer turns and satisfy a self-similar form. We also examine irregular motions of the sheet at late times and find that the irregular motions shrink as δ is decreased. This fact suggests a convergence of the blob solution to the weak solution of infinite regular spiral turns.
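The Krasny-type regularization compared here replaces the singular Biot-Savart kernel by a δ-smoothed one; a minimal sketch of the induced-velocity computation for discrete vortex blobs (the Beale-Majda model uses a different smoothed kernel, not shown):

```python
import numpy as np

def blob_velocity(z, positions, strengths, delta):
    """Velocity induced at point z = (x, y) by vortex blobs using a
    Krasny-type kernel: the singular 1/r^2 Biot-Savart factor is
    replaced by 1/(r^2 + delta^2), removing the vortex-sheet
    singularity. delta is the regularization parameter."""
    x, y = z
    u = v = 0.0
    for (xj, yj), g in zip(positions, strengths):
        dx, dy = x - xj, y - yj
        r2 = dx * dx + dy * dy + delta * delta
        u += -g * dy / (2 * np.pi * r2)
        v += g * dx / (2 * np.pi * r2)
    return u, v
```

At a blob's own location the induced velocity stays finite (zero for a single blob), unlike the point-vortex kernel; as delta → 0 the smoothed kernel recovers the singular one, which is the limit studied in the paper.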

  9. Linking the Gauss-Bonnet-Chern theorem, essential HOPF maps and membrane solitons with exotic spin and statistics

    SciTech Connect

Tze, Chia-Hsiung (Dept. of Physics)

    1989-01-01

By way of the Gauss-Bonnet-Chern theorem, we present a higher dimensional extension of Polyakov's regularization of Wilson loops of point solitons. Spacetime paths of extended objects become hyper-ribbons with self-linking, twisting and writhing numbers. Specifically, we discuss the exotic spin and statistical phase entanglements of geometric n-membrane solitons of D-dimensional KP₁ σ-models with an added Hopf-Chern-Simons term, where (n, D, K) = (0, 3, C), (2, 7, H), (6, 15, O). They are uniquely linked to the complex, quaternion and octonion division algebras. 22 refs.

10. Application of real rock pore-throat statistics to a regular pore network model

    SciTech Connect

Rakibul, M.; Sarker, H.; McIntyre, D.; Ferer, M.; Siddiqui, S.; Bromhal, G.

    2011-01-01

This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high-resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and groundwater flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug by both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for routine core analysis, which can be time consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results.

  11. Application of real rock pore-throat statistics to a regular pore network model

    SciTech Connect

Sarker, M.R.; McIntyre, D.; Ferer, M.; Siddiqui, S.; Bromhal, G.

    2011-01-01

This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high-resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and groundwater flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug by both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for routine core analysis, which can be time consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results.

  12. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length, which is instantaneously inverted at regular intervals, are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
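    The one-dimensional convective-diffusive equation described above can be sketched with a minimal explicit finite-difference scheme. All numerical parameters and the crude reflecting boundary treatment below are illustrative assumptions, not values from the paper:

```python
# Explicit finite-difference step for the 1D convection-diffusion equation
#   dp/dt = D * d2p/dx2 - v * dp/dx
# acting on a probability density p along a closed column.
# D, v, dx, dt, and the grid size are illustrative assumptions.

def step(p, D, v, dx, dt):
    """Advance the density p one time step (interior points only)."""
    n = len(p)
    new = p[:]
    for i in range(1, n - 1):
        diff = D * (p[i + 1] - 2 * p[i] + p[i - 1]) / dx**2
        conv = -v * (p[i + 1] - p[i - 1]) / (2 * dx)
        new[i] = p[i] + dt * (diff + conv)
    # crude reflecting ends (a sketch, not a conservative flux treatment)
    new[0] = new[1]
    new[-1] = new[-2]
    return new

p = [0.0] * 50
p[25] = 1.0  # initial spike of probability density mid-column
for _ in range(100):
    p = step(p, D=0.5, v=0.1, dx=1.0, dt=0.5)
```

    With drift v > 0 the spike both spreads by diffusion and drifts toward larger x, mimicking sedimentation between inversions.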

  13. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
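    The linear-versus-exponential scaling can be made concrete by counting grid points. This is a sketch under the standard sparse-grid construction (level multi-indices l with |l|_1 <= n + d - 1); the authors' dimension-adaptive index sets will differ in detail:

```python
from itertools import product

def full_grid_points(n, d):
    # full tensor product grid: (2**n - 1) interior points per dimension
    return (2**n - 1) ** d

def sparse_grid_points(n, d):
    # sum, over level multi-indices l (each l_j >= 1, |l|_1 <= n + d - 1),
    # of the 2**(l_j - 1) hierarchical points contributed per dimension
    total = 0
    for l in product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            pts = 1
            for lj in l:
                pts *= 2 ** (lj - 1)
            total += pts
    return total
```

    In one dimension both counts coincide; as d grows, the sparse grid retains only a small fraction of the full tensor grid.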

  14. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    PubMed

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically, this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
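    The impute/re-estimate alternation that EM-type algorithms formalize can be illustrated with a toy univariate mean model; the paper's mean-restricted matrix-variate model is far richer, so this is only a structural sketch:

```python
# Toy EM-style loop for missing scalar data under a univariate normal model:
# E-step fills missing entries with the current mean estimate, M-step
# re-estimates the mean from the completed data. Purely illustrative.

def em_impute(values, n_iter=20):
    """values: list of floats, with None marking missing entries."""
    observed = [v for v in values if v is not None]
    mu = sum(observed) / len(observed)  # initialize from observed data
    for _ in range(n_iter):
        # E-step: complete the data with the current mean
        filled = [v if v is not None else mu for v in values]
        # M-step: re-estimate the mean from the completed data
        mu = sum(filled) / len(filled)
    return mu, filled

mu, filled = em_impute([1.0, None, 3.0, None, 2.0])
```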

  15. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION

    PubMed Central

    Allen, Genevera I.; Tibshirani, Robert

    2015-01-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically, this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823

  16. Separate Magnitude and Phase Regularization via Compressed Sensing

    PubMed Central

    Noll, Douglas C.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Compressed sensing (CS) has been used for accelerating magnetic resonance imaging (MRI) acquisitions, but its use in applications with rapid spatial phase variations is challenging, e.g., proton resonance frequency shift (PRF-shift) thermometry and velocity mapping. Previously, an iterative MRI reconstruction with separate magnitude and phase regularization was proposed for applications where magnitude and phase maps are both of interest, but it requires fully sampled data and unwrapped phase maps. In this paper, CS is combined into this framework to reconstruct magnitude and phase images accurately from undersampled data. Moreover, new phase regularization terms are proposed to accommodate phase wrapping and to reconstruct images with encoded phase variations, e.g., PRF-shift thermometry and velocity mapping. The proposed method is demonstrated with simulated thermometry data and in-vivo velocity mapping data and compared to conventional phase corrected CS. PMID:22552571

  17. Improved regularized solution of the inverse problem in turbidimetric measurements.

    PubMed

    Mroczka, Janusz; Szczuczyński, Damian

    2010-08-20

    We present results of simulation research on the constrained regularized least-squares (RLS) solution of the ill-conditioned inverse problem in turbidimetric measurements. The problem is formulated in terms of the discretized Fredholm integral equation of the first kind. The inverse problem in turbidimetric measurements consists in determining the particle size distribution (PSD) function of a particulate system on the basis of turbidimetric measurements. The desired PSD should satisfy two constraints: nonnegativity of PSD values and normalization of the PSD to unity when integrated over the whole range of particle sizes. Incorporating the constraints into the RLS method leads to the constrained regularized least-squares (CRLS) method, which is realized by means of an active-set algorithm of quadratic programming. Results of simulation research prove that the CRLS method reconstructs the PSD considerably better than the RLS method, with higher fidelity and smaller uncertainty.
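    A minimal sketch of the CRLS idea, using projected gradient descent as a simple stand-in for the paper's active-set quadratic-programming algorithm (the matrix, data, and regularization weight are invented for illustration):

```python
# Constrained regularized least squares, sketched:
#   minimize ||A x - b||^2 + lam * ||x||^2  subject to  x >= 0,
# followed by rescaling x to unit sum (the PSD normalization constraint).
# Projected gradient descent replaces the paper's active-set QP solver.

def crls(A, b, lam=0.1, step=0.01, n_iter=2000):
    m, n = len(A), len(A[0])
    x = [1.0 / n] * n
    for _ in range(n_iter):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) + 2 * lam * x[j]
             for j in range(n)]
        # gradient step, then projection onto the nonnegative orthant
        x = [max(0.0, x[j] - step * g[j]) for j in range(n)]
    s = sum(x)
    return [v / s for v in x] if s > 0 else x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # illustrative forward model
b = [0.7, 0.3, 1.0]                        # illustrative measurements
x = crls(A, b)
```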

  18. Regular Expression-Based Learning for METs Value Extraction.

    PubMed

    Redd, Douglas; Kuang, Jinqiu; Mohanty, April; Bray, Bruce E; Zeng-Treitler, Qing

    2016-01-01

    Functional status as measured by exercise capacity is an important clinical variable in the care of patients with cardiovascular diseases. Exercise capacity is commonly reported in terms of Metabolic Equivalents (METs). In the medical records, METs can often be found in a variety of clinical notes. To extract METs values, we adapted a machine-learning algorithm called REDEx to automatically generate regular expressions. Trained and tested on a set of 2701 manually annotated text snippets (i.e., short pieces of text), the regular expressions were able to achieve a good accuracy and F-measure of 0.89 and 0.86, respectively. This extraction tool will allow us to process the notes of millions of cardiovascular patients and extract METs values for use by researchers and clinicians.
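    A hand-written pattern in the spirit of the REDEx-generated expressions (the learned regular expressions themselves are not reproduced here) shows the extraction task:

```python
import re

# Illustrative pattern for pulling METs values out of clinical note snippets.
# The actual REDEx-learned expressions are not published in this abstract.
METS_RE = re.compile(r'(\d+(?:\.\d+)?)\s*METs?\b', re.IGNORECASE)

snippets = [
    "Patient achieved 7 METs on treadmill testing.",
    "estimated exercise capacity 10.5 mets",
    "No METs value documented.",
]
values = [float(m.group(1)) for s in snippets for m in METS_RE.finditer(s)]
```

    The third snippet yields nothing because the pattern requires a numeric value before the unit.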

  19. Information transmission using non-Poisson regular firing.

    PubMed

    Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

    2013-04-01

    In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
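    The regularity measure at play here, the coefficient of variation (CV) of interspike intervals, can be demonstrated on synthetic spike trains: a Poisson process has exponential ISIs with CV close to 1, while a more regular gamma process of shape k has CV = 1/sqrt(k). The rates and shape below are illustrative choices:

```python
import math
import random

def cv(intervals):
    """Coefficient of variation: std / mean of the interspike intervals."""
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return math.sqrt(var) / mean

rng = random.Random(0)  # seeded for reproducibility
poisson_isi = [rng.expovariate(1.0) for _ in range(20000)]
regular_isi = [rng.gammavariate(4.0, 0.25) for _ in range(20000)]  # shape k=4

cv_poisson = cv(poisson_isi)  # should be near 1
cv_regular = cv(regular_isi)  # should be near 1/sqrt(4) = 0.5
```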

  20. Regularizing the r-mode Problem for Nonbarotropic Relativistic Stars

    NASA Technical Reports Server (NTRS)

    Lockitch, Keith H.; Andersson, Nils; Watts, Anna L.

    2004-01-01

    We present results for r-modes of relativistic nonbarotropic stars. We show that the main differential equation, which is formally singular at lowest order in the slow-rotation expansion, can be regularized if one considers the initial value problem rather than the normal mode problem. However, a more physically motivated way to regularize the problem is to include higher order terms. This allows us to develop a practical approach for solving the problem and we provide results that support earlier conclusions obtained for uniform density stars. In particular, we show that there will exist a single r-mode for each permissible combination of l and m. We discuss these results and provide some caveats regarding their usefulness for estimates of gravitational-radiation reaction timescales. The close connection between the seemingly singular relativistic r-mode problem and issues arising because of the presence of co-rotation points in differentially rotating stars is also clarified.

  1. Statistical regularities in the rank-citation profile of scientists

    NASA Astrophysics Data System (ADS)

    Petersen, Alexander M.; Stanley, H. Eugene; Succi, Sauro

    2011-12-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile ci(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each ci(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different ci(r) profiles, our results demonstrate the utility of the βi scaling parameter in conjunction with hi for quantifying individual publication impact. We show that the total number of citations Ci tallied from a scientist's Ni papers scales as Ci ∼ hi^(1+βi/2). Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
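    The rank-citation profile and the quantities built on it can be sketched directly: sort a scientist's papers by citations to get ci(r), read off the h-index as the largest rank r with ci(r) >= r, and sum the profile for Ci (the citation counts below are invented):

```python
# Rank-citation profile c(r): citation counts sorted in decreasing rank order.
# The Hirsch h-index is the largest rank r with c(r) >= r, and the total
# citation count C is the sum over the whole profile.

def rank_profile(citations):
    return sorted(citations, reverse=True)

def h_index(citations):
    profile = rank_profile(citations)
    # profile is nonincreasing, so c(r) >= r holds exactly for r = 1..h
    return sum(1 for r, c in enumerate(profile, start=1) if c >= r)

citations = [10, 8, 5, 4, 3, 0]  # invented citation counts for one scientist
profile = rank_profile(citations)
h = h_index(citations)  # four papers each have at least 4 citations
C = sum(profile)        # total citations
```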

  2. Giant regular polyhedra from calixarene carboxylates and uranyl

    PubMed Central

    Pasquale, Sara; Sattin, Sara; Escudero-Adán, Eduardo C.; Martínez-Belmonte, Marta; de Mendoza, Javier

    2012-01-01

    Self-assembly of large multi-component systems is a common strategy for the bottom-up construction of discrete, well-defined, nanoscopic-sized cages. Icosahedral or pseudospherical viral capsids, built up from hundreds of identical proteins, constitute typical examples of the complexity attained by biological self-assembly. Chemical versions of the so-called 5 Platonic regular or 13 Archimedean semi-regular polyhedra are usually assembled combining molecular platforms with metals with commensurate coordination spheres. Here we report novel, self-assembled cages, using the conical-shaped carboxylic acid derivatives of calix[4]arene and calix[5]arene as ligands, and the uranyl cation UO2²⁺ as a metallic counterpart, which coordinates with three carboxylates at the equatorial plane, giving rise to hexagonal bipyramidal architectures. As a result, octahedral and icosahedral anionic metallocages of nanoscopic dimensions are formed with an unusually small number of components. PMID:22510690

  3. Compound L0 regularization method for image blind motion deblurring

    NASA Astrophysics Data System (ADS)

    Liu, Qiaohong; Sun, Liping; Shao, Zeguo

    2016-09-01

    Blind image deblurring is one of the challenging problems in image processing and computer vision. The main purpose of blind image deblurring is to estimate the correct blur kernel and restore the latent image with edge preservation, detail protection, and ringing suppression. In order to achieve ideal results, an innovative compound L0-regularized model is proposed to estimate the blur kernel by regularizing the sparsity property of natural images and two characteristics of the blur kernel, namely continuity and sparsity. In the alternating direction framework, the split Bregman algorithm and the half-quadratic splitting rule are alternately employed to optimize the proposed kernel estimation model. Finally, a nonblind restoration method with ringing suppression is developed to obtain the ultimate latent image. Extensive experiments demonstrate the efficiency and viability of the proposed method compared with some state-of-the-art blind deblurring methods.
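    The elementwise subproblem that makes L0 regularization tractable under half-quadratic splitting is hard thresholding; a minimal sketch (the threshold and signal values are illustrative, not the paper's):

```python
# Under half-quadratic splitting, the L0 subproblem
#   min_x (x - y)**2 / 2 + lam * (x != 0)
# decouples elementwise and is solved by hard thresholding:
# keep y when y*y/2 > lam, otherwise set it to zero.

def hard_threshold(y, lam):
    return y if y * y > 2 * lam else 0.0

signal = [0.05, -0.9, 0.3, 1.2, -0.1]
lam = 0.1  # keeps entries with |y| > sqrt(2 * lam), about 0.447
sparse = [hard_threshold(v, lam) for v in signal]
```

    Small entries are zeroed outright rather than shrunk, which is what distinguishes the L0 penalty from an L1 one.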

  4. Total Variation Regularization of Matrix-Valued Images

    PubMed Central

    Christiansen, Oddvar; Lee, Tin-Man; Lie, Johan; Sinha, Usha; Chan, Tony F.

    2007-01-01

    We generalize the total variation restoration model, introduced by Rudin, Osher, and Fatemi in 1992, to matrix-valued data, in particular, to diffusion tensor images (DTIs). Our model is a natural extension of the color total variation model proposed by Blomgren and Chan in 1998. We treat the diffusion matrix D implicitly as the product D = LLT, and work with the elements of L as variables, instead of working directly on the elements of D. This ensures positive definiteness of the tensor during the regularization flow, which is essential when regularizing DTI. We perform numerical experiments on both synthetical data and 3D human brain DTI, and measure the quantitative behavior of the proposed model. PMID:18256729
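    The D = LLT parametrization can be checked in a few lines: for any L, the product L·L-transpose is symmetric and positive semi-definite, which is what keeps the tensor valid during the regularization flow (the 2x2 values below are illustrative):

```python
# For any square L, D = L @ L.T is symmetric positive semi-definite,
# so optimizing over the entries of L can never leave the PSD cone.

def matmul_t(L):
    """Return D = L @ L.T for a square matrix given as nested lists."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_psd_2x2(D):
    # a symmetric 2x2 matrix is PSD iff both diagonal entries and the
    # determinant are nonnegative
    return D[0][0] >= 0 and D[1][1] >= 0 and \
        D[0][0] * D[1][1] - D[0][1] * D[1][0] >= 0

L = [[1.0, 0.0], [-2.0, 0.5]]  # arbitrary, even with negative entries
D = matmul_t(L)
```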

  5. Persistent low-grade inflammation and regular exercise.

    PubMed

    Astrom, Maj-Briit; Feigh, Michael; Pedersen, Bente Klarlund

    2010-01-01

    Persistent low-grade systemic inflammation is a feature of chronic diseases such as cardiovascular disease (CVD), type 2 diabetes and dementia, and evidence exists that inflammation is a causal factor in the development of insulin resistance and atherosclerosis. Regular exercise offers protection against all of these diseases, and recent evidence suggests that the protective effect of exercise may to some extent be ascribed to an anti-inflammatory effect of regular exercise. Visceral adiposity contributes to systemic inflammation and is independently associated with the occurrence of CVD, type 2 diabetes and dementia. We suggest that the anti-inflammatory effects of exercise may be mediated via a long-term effect of exercise leading to a reduction in visceral fat mass and/or by induction of anti-inflammatory cytokines with each bout of exercise.

  6. Existence and Regularity for Dynamic Viscoelastic Adhesive Contact with Damage

    SciTech Connect

    Kuttler, Kenneth L.; Shillor, Meir; Fernandez, Jose R.

    2006-01-15

    A model for the dynamic process of frictionless adhesive contact between a viscoelastic body and a reactive foundation, which takes into account the damage of the material resulting from tension or compression, is presented. Contact is described by the normal compliance condition. Material damage is modelled by the damage field, which measures the pointwise fractional decrease in the load-carrying capacity of the material, and its evolution is described by a differential inclusion. The model allows for different damage rates caused by tension or compression. The adhesion is modelled by the bonding field, which measures the fraction of active bonds on the contact surface. The existence of the unique weak solution is established using the theory of set-valued pseudomonotone operators introduced by Kuttler and Shillor (1999). Additional regularity of the solution is obtained when the problem data is more regular and satisfies appropriate compatibility conditions.

  7. Statistical regularities in the rank-citation profile of scientists

    PubMed Central

    Petersen, Alexander M.; Stanley, H. Eugene; Succi, Sauro

    2011-01-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile ci(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each ci(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different ci(r) profiles, our results demonstrate the utility of the βi scaling parameter in conjunction with hi for quantifying individual publication impact. We show that the total number of citations Ci tallied from a scientist's Ni papers scales as Ci ∼ hi^(1+βi/2). Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress. PMID:22355696

  8. Mechanisms of evolution of avalanches in regular graphs.

    PubMed

    Handford, Thomas P; Pérez-Reche, Francisco J; Taraskin, Sergei N

    2013-06-01

    A mapping of avalanches occurring in the zero-temperature random-field Ising model to life periods of a population experiencing immigration is established. Such a mapping allows the microscopic criteria for the occurrence of an infinite avalanche in a q-regular graph to be determined. A key factor for an avalanche of spin flips to become infinite is that it interacts in an optimal way with previously flipped spins. Based on these criteria, we explain why an infinite avalanche can occur in q-regular graphs only for q>3 and suggest that this criterion might be relevant for other systems. The generating function techniques developed for branching processes are applied to obtain analytical expressions for the durations, pulse shapes, and power spectra of the avalanches. The results show that only very long avalanches exhibit a significant degree of universality.
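    The generating-function technique can be sketched with a simple branching approximation: if each of the q - 1 onward neighbors flips independently with probability p, the avalanche dies out with probability given by the smallest fixed point of f(s) = (1 - p + ps)^(q-1), and an infinite avalanche is possible iff the mean offspring number (q - 1)p exceeds 1. Both the branching approximation and the value of p are illustrative, not the paper's full zero-temperature RFIM analysis:

```python
# Extinction probability of an avalanche under a Galton-Watson branching
# approximation on a q-regular graph: iterate s -> f(s) from s = 0, which
# converges to the smallest fixed point of the offspring generating function
#   f(s) = (1 - p + p * s) ** (q - 1).

def extinction_probability(q, p, n_iter=200):
    s = 0.0
    for _ in range(n_iter):
        s = (1 - p + p * s) ** (q - 1)
    return s

sub = extinction_probability(q=4, p=0.2)  # mean offspring 0.6: subcritical
sup = extinction_probability(q=4, p=0.5)  # mean offspring 1.5: supercritical
```

    In the subcritical case the avalanche dies out with probability 1; in the supercritical case the extinction probability drops below 1, leaving room for an infinite avalanche.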

  9. Soft Constraints in Nonlinear Spectral Fitting with Regularized Lineshape Deconvolution

    PubMed Central

    Zhang, Yan; Shen, Jun

    2012-01-01

    This paper presents a novel method for incorporating a priori knowledge into regularized nonlinear spectral fitting as soft constraints. Regularization was recently introduced to lineshape deconvolution as a method for correcting spectral distortions. Here, the deconvoluted lineshape was described by a new type of lineshape model and applied to spectral fitting. The non-linear spectral fitting was carried out in two steps that were subject to hard constraints and soft constraints, respectively. The hard constraints step provided a starting point and, therefore, only the changes of the relevant variables were constrained in the soft constraints step and incorporated into the linear sub-steps of the Levenberg-Marquardt algorithm. The method was demonstrated using localized averaged echo time point resolved spectroscopy (PRESS) proton spectroscopy of human brains. PMID:22618964

  10. Validity and Regularization of Classical Half-Space Equations

    NASA Astrophysics Data System (ADS)

    Li, Qin; Lu, Jianfeng; Sun, Weiran

    2017-01-01

    A recent result (Wu and Guo, Commun Math Phys 336(3):1473-1553, 2015) has shown that over the 2D unit disk, the classical half-space equation (CHS) for neutron transport does not capture the correct boundary layer behaviour, contrary to what was long believed. In this paper we develop a regularization technique for the CHS to any arbitrary order and use its first-order regularization to show that, in the case of the 2D unit disk, although the CHS misrepresents the boundary layer behaviour, it does give the correct boundary condition for the interior macroscopic (Laplace) equation. Therefore the CHS is still a valid equation for recovering the correct boundary condition for the interior Laplace equation over the 2D unit disk.

  11. Regularization of inverse photomask synthesis to enhance manufacturability

    NASA Astrophysics Data System (ADS)

    Jia, Ningning; Wong, Alfred K.; Lam, Edmund Y.

    2009-12-01

    Mask manufacturability has been considered a major issue in the adoption of inverse lithography (IL) in practice. At smaller technology nodes, IL distorts the mask pattern more aggressively. The distorted mask often contains curvilinear contours and irregular shapes, which cast a heavy computational burden on segmentation and data preparation. Total variation (TV) has been used for regularization in previous work, but it is not very effective at regulating the mask shape to be rectangular. In this paper, we apply TV regularization not only to the mask image but also to the mask edges, which forces the curves of the edges to be more vertical or horizontal, because such edges give smaller TV values. Beyond rectilinearity, a group of geometrical specifications of the mask pattern set by mask manufacture rule control (MRC) is also important for mask manufacturability. To prevent violations of these specifications, we also propose an intervention scheme within the optimization framework.

  12. Experimental evidence for formation mechanism of regular circular fringes

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhu, R.; Wang, G.; Wang, P.; Li, H.; Zhang, W.; Ren, G.

    2016-10-01

    Laser active suppressing jamming is one of the most effective technologies for coping with optoelectric imaging systems. In the course of laser disturbing experiments, regular circular fringes often appeared on the detector in addition to the laser spot converged by the optical system. First, the formation of the circular fringes was experimentally investigated by using a simple converging lens in place of the complex optical system. Moreover, the circular fringes were simulated based on the interference theory of coherent light. The agreement between the experimental phenomena and the simulated results showed that the formation mechanism of the regular circular fringes was interference on the detector between light reflected by the back surface of the lens and directly refracted light. Finally, the visibility of the circular fringes was calculated to range from 0.05 to 0.22 according to the current coating standard of lens surfaces and the manufacturing technique of optoelectric detectors.

  13. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, which form a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.

  14. Drop impact upon superhydrophobic surfaces with regular and hierarchical roughness

    NASA Astrophysics Data System (ADS)

    Lv, Cunjing; Hao, Pengfei; Zhang, Xiwen; He, Feng

    2016-04-01

    Recent studies demonstrate that the roughness and morphology of textures play essential roles in the dynamics of a water drop impacting onto superhydrophobic substrates. In particular, significant reduction of the contact time has attracted considerable attention. We experimentally investigate drop impact dynamics on three types of superhydrophobic surfaces, consisting of regular micropillars, two-tier textures with nano/micro-scale roughness, and hierarchical textures with random roughness. We find that the contact time is controlled by the Weber number and the roughness of the surface. Compared with drop impact on regular micropillared surfaces, the contact time can be finely reduced by increasing the Weber number on surfaces with two-tier textures, but can be remarkably reduced on surfaces with hierarchical textures as a result of the prompt splash and fragmentation of liquid lamellae. Our study may shed light on textured-material fabrication, allowing rapid drop detachment for broad applications.

  15. Partial Regularity for Holonomic Minimisers of Quasiconvex Functionals

    NASA Astrophysics Data System (ADS)

    Hopper, Christopher P.

    2016-10-01

    We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals, over a class of Sobolev mappings into a compact Riemannian manifold, to which such mappings are said to be holonomically constrained. Our approach uses the lifting of Sobolev mappings to the universal covering space, the connectedness of the covering space, an application of Ekeland's variational principle and a certain tangential A-harmonic approximation lemma obtained directly via a Lipschitz approximation argument. This allows regularity to be established directly on the level of the gradient. Several applications to variational problems in condensed matter physics with broken symmetries are also discussed, in particular those concerning the superfluidity of liquid helium-3 and nematic liquid crystals.

  16. Compressing Regular Expressions' DFA Table by Matrix Decomposition

    NASA Astrophysics Data System (ADS)

    Liu, Yanbing; Guo, Li; Liu, Ping; Tan, Jianlong

    Recently regular expression matching has become a research focus as a result of the urgent demand for Deep Packet Inspection (DPI) in many network security systems. Deterministic Finite Automaton (DFA), which recognizes a set of regular expressions, is usually adopted to cater to the need for real-time processing of network traffic. However, the huge memory usage of DFA prevents it from being applied even on a medium-sized pattern set. In this article, we propose a matrix decomposition method for DFA table compression. The basic idea of the method is to decompose a DFA table into the sum of a row vector, a column vector and a sparse matrix, all of which cost very little space. Experiments on typical rule sets show that the proposed method significantly reduces the memory usage and still runs at fast searching speed.
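    The decomposition idea can be sketched in a few lines: approximate T[i][j] by r[i] + c[j] and keep only the disagreements in a sparse residual map. The heuristic below (row minima, then column medians) is an illustrative choice, not the article's exact algorithm:

```python
# Sketch of row/column/sparse decomposition of a DFA transition table:
# T[i][j] = r[i] + c[j] + residual[(i, j)], with the residual stored
# sparsely so that lookups stay O(1) while memory shrinks.

def decompose(T):
    rows, cols = len(T), len(T[0])
    r = [min(row) for row in T]                # row vector: row minima
    c = []
    for j in range(cols):
        col = sorted(T[i][j] - r[i] for i in range(rows))
        c.append(col[rows // 2])               # column vector: medians
    residual = {}                              # sparse leftover entries
    for i in range(rows):
        for j in range(cols):
            d = T[i][j] - r[i] - c[j]
            if d != 0:
                residual[(i, j)] = d
    return r, c, residual

def lookup(r, c, residual, i, j):
    return r[i] + c[j] + residual.get((i, j), 0)

T = [[1, 2, 1], [3, 4, 3], [1, 2, 5]]  # toy transition table
r, c, residual = decompose(T)
```

    On this toy table the nine entries compress to two length-3 vectors plus a single sparse residual, while lookup still reconstructs every entry exactly.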

  17. Regular Expression-Based Learning for METs Value Extraction

    PubMed Central

    Redd, Douglas; Kuang, Jinqiu; Mohanty, April; Bray, Bruce E.; Zeng-Treitler, Qing

    2016-01-01

    Functional status as measured by exercise capacity is an important clinical variable in the care of patients with cardiovascular diseases. Exercise capacity is commonly reported in terms of Metabolic Equivalents (METs). In the medical records, METs can often be found in a variety of clinical notes. To extract METs values, we adapted a machine-learning algorithm called REDEx to automatically generate regular expressions. Trained and tested on a set of 2701 manually annotated text snippets (i.e., short pieces of text), the regular expressions were able to achieve a good accuracy and F-measure of 0.89 and 0.86, respectively. This extraction tool will allow us to process the notes of millions of cardiovascular patients and extract METs values for use by researchers and clinicians. PMID:27570673

  18. Chaos at Uranus Spreads Dust Across the Regular Satellites

    NASA Astrophysics Data System (ADS)

    Tamayo, Dan; Burns, J. A.; Nicholson, P. D.; Hamilton, D. P.

    2012-05-01

    The short collision timescales between the Uranian irregular satellites argue for the past generation of vast quantities of dust at the outer reaches of Uranus’ Hill sphere (Bottke et al. 2010). Uranus’ extreme obliquity (98 degrees) renders the orbits of large objects unstable to eccentricity perturbations in the radial range a ≈ 60-75 Rp (Tremaine et al. 2009). We study the effect on dust by investigating how the instability is modified by radiation pressure. We find that dust particles generated at the orbits of the irregular satellites move inward as radiation forces cause their orbits to decay (Burns et al. 1979). When they reach the unstable region, grain orbits undergo chaotic large-amplitude eccentricity oscillations that bring their pericenters inside the orbits of the regular satellites. We argue that the impact probabilities and expected spatial distribution across the satellite surfaces might explain the observed hemispherical color asymmetries common to the outer four regular satellites.

  19. Regularization of hidden dynamics in piecewise smooth flows

    NASA Astrophysics Data System (ADS)

    Novaes, Douglas D.; Jeffrey, Mike R.

    2015-11-01

    This paper studies the equivalence between differentiable and non-differentiable dynamics in Rn. Filippov's theory of discontinuous differential equations allows us to find flow solutions of dynamical systems whose vector fields undergo switches at thresholds in phase space. The canonical convex combination at the discontinuity is only the linear part of a nonlinear combination that more fully explores Filippov's most general problem: the differential inclusion. Here we show how recent work relating discontinuous systems to singular limits of continuous (or regularized) systems extends to nonlinear combinations. We show that if sliding occurs in a discontinuous system, there exists a differentiable slow-fast system with equivalent slow invariant dynamics. We also show the corresponding result for the pinching method, a converse to regularization which approximates a smooth system by a discontinuous one.

  20. Gevrey regularity for the supercritical quasi-geostrophic equation

    NASA Astrophysics Data System (ADS)

    Biswas, Animikh

    2014-09-01

    In this paper, following the techniques of Foias and Temam, we establish suitable Gevrey class regularity of solutions to the supercritical quasi-geostrophic equations in the whole space, with initial data in “critical” Sobolev spaces. Moreover, the Gevrey class that we obtain is “near optimal” and as a corollary, we obtain temporal decay rates of higher order Sobolev norms of the solutions. Unlike the Navier-Stokes or the subcritical quasi-geostrophic equations, the low dissipation poses a difficulty in establishing Gevrey regularity. A new commutator estimate in Gevrey classes, involving the dyadic Littlewood-Paley operators, is established that allows us to exploit the cancellation properties of the equation and circumvent this difficulty.