Science.gov

Sample records for dimensionally regularized polyakov

  1. Dimensional regularization is generic

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo

    2016-09-01

    The absence of the quadratic divergence in the Higgs sector of the Standard Model in the dimensional regularization is usually regarded as an exceptional property of a specific regularization. To understand what is going on in the dimensional regularization, we illustrate how to reproduce the results of the dimensional regularization for the λϕ4 theory in more conventional regularizations such as the higher derivative regularization; the basic postulate involved is that the quadratically divergent induced mass, which is independent of the scale change of the physical mass, is kinematical and unphysical. This is consistent with the derivation of the Callan-Symanzik equation, which compares two theories with slightly different masses, for the λϕ4 theory without encountering the quadratic divergence. In this sense the dimensional regularization may be said to be generic in a bottom-up approach starting with a successful low energy theory. We also define a modified version of the mass-independent renormalization for a scalar field which leads to the homogeneous renormalization group equation. Implications of the present analysis for the Standard Model at high energies and the presence or absence of SUSY at LHC energies are briefly discussed.
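    As background for the quadratic-divergence discussion, the standard one-loop tadpole of λϕ4 theory in dimensional regularization (a textbook identity, not taken from the paper) makes the point concrete: the would-be quadratic divergence appears only as a pole multiplying m², never as an m-independent induced mass.

      % One-loop tadpole in d Euclidean dimensions; standard result.
      \int \frac{d^dk}{(2\pi)^d}\,\frac{1}{k^2+m^2}
        = \frac{\Gamma(1-d/2)}{(4\pi)^{d/2}}\,m^{d-2}
        \;\xrightarrow{\,d=4-\epsilon\,}\;
        -\frac{m^2}{8\pi^2\epsilon}+\mathrm{finite},
      % whereas a momentum cutoff \Lambda would also produce a
      % \Lambda^2 term carrying no factor of m^2.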

  2. Dimensional regularization in configuration space

    SciTech Connect

    Bollini, C.G.; Giambiagi, J.J.

    1996-05-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in ν dimensions the perturbative momentum-space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov and Shirkov, are needed for convolutions. The regularized causal functions in x space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant analytic functions of ν. Several examples are discussed. © 1996 The American Physical Society.

  3. Dimensional regularization and dimensional reduction in the light cone

    SciTech Connect

    Qiu, J.

    2008-06-15

    We calculate all of the 2 → 2 scattering processes in Yang-Mills theory in the light-cone gauge, with the dimensional regulator as the UV regulator. The IR is regulated with a cutoff in q⁺. This supplements our earlier work, where a Lorentz-noncovariant regulator was used and the final results suffered from problems with gauge fixing. Supersymmetry relations among various amplitudes are checked by using the light-cone superfields.

  4. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562
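    The alternating scheme described above can be sketched generically (an illustrative toy, not the authors' algorithm: the RBF kernels, the alignment-based weight update, and all parameters are assumptions, and the paper's binary-search step is omitted):

      import numpy as np

      def gram(X, gamma):
          """RBF Gram matrix for one candidate kernel."""
          sq = np.sum(X**2, axis=1)
          return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

      def mkl_dr(X, gammas, dim=2, iters=10):
          """Alternate between (a) an embedding from the combined kernel and
          (b) kernel weights favoring kernels aligned with that embedding."""
          kernels = [gram(X, g) for g in gammas]
          beta = np.ones(len(kernels)) / len(kernels)
          n = X.shape[0]
          H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
          for _ in range(iters):
              K = sum(b * Km for b, Km in zip(beta, kernels))
              w, V = np.linalg.eigh(H @ K @ H)       # (a) kernel-PCA embedding
              Y = V[:, -dim:] * np.sqrt(np.maximum(w[-dim:], 0))
              G = Y @ Y.T                            # Gram matrix of the embedding
              a = np.array([np.sum(Km * G) / np.linalg.norm(Km)
                            for Km in kernels])      # (b) kernel-alignment scores
              beta = np.maximum(a, 0)
              beta /= beta.sum()
          return Y, beta

      X = np.random.default_rng(0).normal(size=(100, 5))
      Y, beta = mkl_dr(X, gammas=[0.1, 1.0, 10.0])
      print(Y.shape, np.round(beta, 3))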

  5. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  6. Higher-Order Global Regularity of an Inviscid Voigt-Regularization of the Three-Dimensional Inviscid Resistive Magnetohydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Titi, Edriss S.

    2014-03-01

    We prove existence, uniqueness, and higher-order global regularity of strong solutions to a particular Voigt-regularization of the three-dimensional inviscid resistive magnetohydrodynamic (MHD) equations. Specifically, the coupling of a resistive magnetic field to the Euler-Voigt model is introduced to form an inviscid regularization of the inviscid resistive MHD system. The results hold in both the whole space and in the context of periodic boundary conditions. Weak solutions for this regularized model are also considered, and proven to exist globally in time, but the question of uniqueness for weak solutions is still open. Furthermore, we show that the solutions of the Voigt-regularized system converge, as the regularization parameter α → 0, to strong solutions of the original inviscid resistive MHD, on the corresponding time interval of existence of the latter. Moreover, we also establish a new criterion for blow-up of solutions to the original MHD system inspired by this Voigt regularization.

  7. Higher-Order Global Regularity of an Inviscid Voigt-Regularization of the Three-Dimensional Inviscid Resistive Magnetohydrodynamic Equations

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Titi, Edriss S.

    2013-05-01

    We prove existence, uniqueness, and higher-order global regularity of strong solutions to a particular Voigt-regularization of the three-dimensional inviscid resistive magnetohydrodynamic (MHD) equations. Specifically, the coupling of a resistive magnetic field to the Euler-Voigt model is introduced to form an inviscid regularization of the inviscid resistive MHD system. The results hold in both the whole space R³ and in the context of periodic boundary conditions. Weak solutions for this regularized model are also considered, and proven to exist globally in time, but the question of uniqueness for weak solutions is still open. Furthermore, we show that the solutions of the Voigt-regularized system converge, as the regularization parameter α → 0, to strong solutions of the original inviscid resistive MHD, on the corresponding time interval of existence of the latter. Moreover, we also establish a new criterion for blow-up of solutions to the original MHD system inspired by this Voigt regularization.
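    For orientation, the Voigt regularization referred to in both records modifies only the time-derivative term of the momentum equation; schematically (a sketch of the standard form of the inviscid resistive MHD-Voigt system, with α the regularization length scale):

      % alpha > 0 is the regularization parameter; alpha -> 0 formally recovers
      % the inviscid resistive MHD system.
      \partial_t(u-\alpha^2\Delta u)+(u\cdot\nabla)u+\nabla p=(B\cdot\nabla)B,
        \qquad \nabla\cdot u=0,
      \partial_t B+(u\cdot\nabla)B-(B\cdot\nabla)u=\mu\Delta B,
        \qquad \nabla\cdot B=0.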

  8. Dimensional reduction in numerical relativity: Modified Cartoon formalism and regularization

    NASA Astrophysics Data System (ADS)

    Cook, William G.; Figueras, Pau; Kunesch, Markus; Sperhake, Ulrich; Tunyasuvunakool, Saran

    2016-06-01

    We present in detail the Einstein equations in the Baumgarte-Shapiro-Shibata-Nakamura formulation for the case of D-dimensional spacetimes with SO(D ‑ d) isometry based on a method originally introduced in Ref. 1. Regularized expressions are given for a numerical implementation of this method on a vertex centered grid including the origin of the quasi-radial coordinate that covers the extra dimensions with rotational symmetry. Axisymmetry, corresponding to the value d = D ‑ 2, represents a special case with fewer constraints on the vanishing of tensor components and is conveniently implemented in a variation of the general method. The robustness of the scheme is demonstrated for the case of a black-hole head-on collision in D = 7 spacetime dimensions with SO(4) symmetry.

  9. Dimensional reduction in numerical relativity: Modified Cartoon formalism and regularization

    NASA Astrophysics Data System (ADS)

    Cook, William G.; Figueras, Pau; Kunesch, Markus; Sperhake, Ulrich; Tunyasuvunakool, Saran

    2016-06-01

    We present in detail the Einstein equations in the Baumgarte-Shapiro-Shibata-Nakamura formulation for the case of D-dimensional spacetimes with SO(D - d) isometry based on a method originally introduced in Ref. 1. Regularized expressions are given for a numerical implementation of this method on a vertex centered grid including the origin of the quasi-radial coordinate that covers the extra dimensions with rotational symmetry. Axisymmetry, corresponding to the value d = D - 2, represents a special case with fewer constraints on the vanishing of tensor components and is conveniently implemented in a variation of the general method. The robustness of the scheme is demonstrated for the case of a black-hole head-on collision in D = 7 spacetime dimensions with SO(4) symmetry.

  10. Common Analytic Basis of the ζ-Function and the Dimensional Regularization Schemes

    NASA Astrophysics Data System (ADS)

    Lohiya, Daksh

    The analytic continuation invoked in the theory of generalized zeta functions associated with infinite-dimensional operators is shown to be equivalent in structure to the basic analytic methods deployed in dimensional regularization.
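    For orientation, the ζ-function scheme rests on the standard analytic continuation (textbook definitions, not taken from the paper):

      % Generalized zeta function of an operator A with eigenvalues lambda_n:
      \zeta_A(s)=\sum_n \lambda_n^{-s}
                =\frac{1}{\Gamma(s)}\int_0^\infty dt\,t^{s-1}\,\mathrm{Tr}\,e^{-tA},
      % continued analytically to s = 0, where one defines
      \ln\det A := -\zeta_A'(0),
      % in close analogy with the analytic continuation in the dimension \nu
      % used by dimensional regularization.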

  11. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    PubMed

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

    Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable for certain reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and to encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly consistent in selecting genes compared to the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than that of other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification.
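    A minimal sketch of the adaptive elastic net idea for logistic regression (illustrative only: AAElastic's adjusted initial weights are not reproduced, scikit-learn is assumed, and the data and hyperparameters are synthetic):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 50))                 # n=120 samples, p=50 "genes"
      y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=120) > 0).astype(int)

      # Stage 1: initial estimates from a ridge-penalized logistic fit.
      init = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
      w = 1.0 / (np.abs(init.coef_.ravel()) + 1e-3)  # adaptive weights

      # Stage 2: adaptive elastic net = elastic net on rescaled features X / w.
      model = LogisticRegression(penalty="elasticnet", solver="saga",
                                 l1_ratio=0.5, C=0.5, max_iter=5000)
      model.fit(X / w, y)
      coef = model.coef_.ravel() / w                 # map back to original scale
      print("selected genes:", np.flatnonzero(np.abs(coef) > 1e-8))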

  12. Nonet meson properties in the Nambu-Jona-Lasinio model with dimensional versus cutoff regularization

    SciTech Connect

    Inagaki, T.; Kimura, D.; Kohyama, H.; Kvinikhidze, A.

    2011-02-01

    The Nambu-Jona-Lasinio model with a Kobayashi-Maskawa-'t Hooft term is one low energy effective theory of QCD which includes the U_A(1) anomaly. We investigate nonet meson properties in this model with three flavors of quarks. We employ two types of regularizations, the dimensional and sharp cutoff ones. The model parameters are fixed phenomenologically for each regularization. Evaluating the kaon decay constant, the η meson mass and the topological susceptibility, we show the regularization dependence of the results and discuss the applicability of the Nambu-Jona-Lasinio model.

  13. On the Global Regularity of the Two-Dimensional Density Patch for Inhomogeneous Incompressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Liao, Xian; Zhang, Ping

    2016-06-01

    Regarding P.-L. Lions' open question in Oxford Lecture Series in Mathematics and its Applications, Vol. 3 (1996) concerning the propagation of regularity for the density patch, we establish the global existence of solutions to the two-dimensional inhomogeneous incompressible Navier-Stokes system with initial density given by (1 - η) 1_{Ω_0} + 1_{Ω_0^c} for some small enough constant η and some W^{k+2,p} domain Ω_0, with initial vorticity belonging to L^1 ∩ L^p and with appropriate tangential regularities. Furthermore, we prove that the regularity of the domain Ω_0 is preserved by time evolution.

  14. A Regular Tetrahedron Formation Strategy for Swarm Robots in Three-Dimensional Environment

    NASA Astrophysics Data System (ADS)

    Ercan, M. Fikret; Li, Xiang; Liang, Ximing

    A decentralized control method, namely Regular Tetrahedron Formation (RTF), is presented for a swarm of simple robots operating in three-dimensional space. It is based on a virtual spring mechanism and enables four neighboring robots to autonomously form a regular tetrahedron (RT) regardless of their initial positions. The RTF method is applied to swarms of various sizes through a dynamic neighbor selection procedure; each robot's behavior depends only on the positions of its three dynamically selected neighbors. An obstacle avoidance model is also introduced. Finally, the algorithm is studied with computational experiments, which demonstrate that it is effective.
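    The virtual-spring mechanism can be illustrated in a few lines (a toy integrator with arbitrary gain, rest length, and step size, not the paper's RTF control law):

      import numpy as np

      def rtf_step(P, d=1.0, k=0.5, dt=0.1):
          """One step: every robot feels virtual spring forces pulling each
          pairwise distance toward the tetrahedron edge length d."""
          F = np.zeros_like(P)
          for i in range(len(P)):
              for j in range(len(P)):
                  if i != j:
                      r = P[j] - P[i]
                      dist = np.linalg.norm(r) + 1e-12
                      F[i] += k * (dist - d) * r / dist   # spring along line i-j
          return P + dt * F

      P = np.random.default_rng(1).uniform(size=(4, 3))   # 4 robots in 3-D space
      for _ in range(500):
          P = rtf_step(P)
      edges = [np.linalg.norm(P[i] - P[j])
               for i in range(4) for j in range(i + 1, 4)]
      print(np.round(edges, 3))   # all six edge lengths converge near d = 1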

  15. Regularization strategy for an inverse problem for a 1 + 1 dimensional wave equation

    NASA Astrophysics Data System (ADS)

    Korpela, Jussi; Lassas, Matti; Oksanen, Lauri

    2016-06-01

    An inverse boundary value problem for a 1 + 1 dimensional wave equation with a wave speed c(x) is considered. We give a regularization strategy for inverting the map A : c ↦ Λ, where Λ is the hyperbolic Neumann-to-Dirichlet map corresponding to the wave speed c. That is, we consider the case when we are given a perturbation of the Neumann-to-Dirichlet map Λ̃ = Λ + E, where E corresponds to the measurement errors, and reconstruct an approximative wave speed c̃. We emphasize that Λ̃ may not be in the range of the map A. We show that the reconstructed wave speed c̃ satisfies ||c̃ - c|| ≤ C ||E||^{1/54}. Our regularization strategy is based on a new formula to compute c from Λ.

  16. Dimensional regularization of the third post-Newtonian gravitational wave generation from two point masses

    SciTech Connect

    Blanchet, Luc; Esposito-Farese, Gilles; Damour, Thibault; Iyer, Bala R.

    2005-06-15

    Dimensional regularization is applied to the computation of the gravitational wave field generated by compact binaries at the third post-Newtonian (3PN) approximation. We generalize the wave generation formalism from isolated post-Newtonian matter systems to d spatial dimensions, and apply it to point masses (without spins), modeled by delta-function singularities. We find that the quadrupole moment of point-particle binaries in harmonic coordinates contains a pole when ε ≡ d - 3 → 0 at the 3PN order. It is proved that the pole can be renormalized away by means of the same shifts of the particle world lines as in our recent derivation of the 3PN equations of motion. The resulting renormalized (finite when ε → 0) quadrupole moment leads to unique values for the ambiguity parameters ξ, κ, and ζ, which were introduced in previous computations using Hadamard's regularization. Several checks of these values are presented. These results complete the derivation of the gravitational waves emitted by inspiralling compact binaries up to the 3.5PN level of accuracy which is needed for detection and analysis of the signals in the gravitational wave antennas LIGO/VIRGO and LISA.

  17. Globally regular instability of 3-dimensional anti-de Sitter spacetime.

    PubMed

    Bizoń, Piotr; Jałmużna, Joanna

    2013-07-26

    We consider three-dimensional anti-de Sitter (AdS) gravity minimally coupled to a massless scalar field and study numerically the evolution of small smooth circularly symmetric perturbations of the AdS3 spacetime. As in higher dimensions, for a large class of perturbations, we observe a turbulent cascade of energy to high frequencies which entails instability of AdS3. However, in contrast to higher dimensions, the cascade cannot be terminated by black hole formation because small perturbations have energy below the black hole threshold. This situation appears to be challenging for the cosmic censor. Analyzing the energy spectrum of the cascade we determine the width ρ(t) of the analyticity strip of solutions in the complex spatial plane and argue by extrapolation that ρ(t) does not vanish in finite time. This provides evidence that the turbulence is too weak to produce a naked singularity and the solutions remain globally regular in time, in accordance with the cosmic censorship hypothesis. PMID:23931347
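    The analyticity-strip diagnostic mentioned above amounts to fitting an exponential tail to the energy spectrum; a toy version on synthetic data (the spectral form E_k ~ k^(-α) e^(-2ρk) is the standard ansatz of the technique, and all numbers here are invented):

      import numpy as np

      # Synthetic spectrum: power-law prefactor times exponential tail.
      k = np.arange(1, 200)
      alpha_true, rho_true = 1.7, 0.05
      E = k**(-alpha_true) * np.exp(-2 * rho_true * k)

      # Linear least squares on log E_k = c - alpha*log(k) - 2*rho*k.
      A = np.column_stack([np.ones_like(k, dtype=float), -np.log(k), -2.0 * k])
      c, alpha, rho = np.linalg.lstsq(A, np.log(E), rcond=None)[0]
      print(f"fitted alpha = {alpha:.3f}, analyticity width rho = {rho:.4f}")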

  18. Accelerated motion corrected three‐dimensional abdominal MRI using total variation regularized SENSE reconstruction

    PubMed Central

    Atkinson, David; Buerger, Christian; Schaeffter, Tobias; Prieto, Claudia

    2015-01-01

    Purpose: To develop a nonrigid motion-corrected reconstruction for highly accelerated free-breathing three-dimensional (3D) abdominal images without external sensors or additional scans. Methods: The proposed method accelerates the acquisition by undersampling and performs motion correction directly in the reconstruction using a general matrix description of the acquisition. Data are acquired using a self-gated 3D golden radial phase encoding trajectory, enabling a two-stage reconstruction to estimate and then correct motion of the same data. In the first stage, total variation regularized iterative SENSE is used to reconstruct highly undersampled respiratory-resolved images. A nonrigid registration of these images is performed to estimate the complex motion in the abdomen. In the second stage, the estimated motion fields are incorporated in a general matrix reconstruction, which uses total variation regularization and incorporates k-space data from multiple respiratory positions. The proposed approach was tested on nine healthy volunteers and compared against a standard gated reconstruction using measures of liver sharpness, gradient entropy, visual assessment of image sharpness, and overall image quality by two experts. Results: The proposed method achieves similar quality to the gated reconstruction with nonsignificant differences for liver sharpness (1.18 and 1.00, respectively), gradient entropy (1.00 and 1.00), visual score of image sharpness (2.22 and 2.44), and visual rank of image quality (3.33 and 3.39). An average reduction of the acquisition time from 102 s to 39 s could be achieved with the proposed method. Conclusion: In vivo results demonstrate the feasibility of the proposed method, showing similar image quality to the standard gated reconstruction while using data corresponding to a significantly reduced acquisition time. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc.
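    A bare-bones total-variation-regularized iteration of the kind underlying such reconstructions (a 2-D denoising sketch with a smoothed TV gradient; the MRI encoding operator, coil sensitivities, and motion matrices are all omitted):

      import numpy as np

      def tv_grad(x, eps=1e-6):
          """Gradient of the smoothed isotropic total variation of image x."""
          dx = np.diff(x, axis=0, append=x[-1:, :])
          dy = np.diff(x, axis=1, append=x[:, -1:])
          mag = np.sqrt(dx**2 + dy**2 + eps)
          px, py = dx / mag, dy / mag
          div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
          return -div

      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
      b = truth + 0.2 * rng.normal(size=truth.shape)   # noisy "acquired" data

      x, lam, step = b.copy(), 0.15, 0.2
      for _ in range(200):                             # gradient descent on
          x -= step * ((x - b) + lam * tv_grad(x))     # 0.5||x-b||^2 + lam*TV(x)
      print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))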

  19. Three-dimensional beam pattern of regular sperm whale clicks confirms bent-horn hypothesis

    NASA Astrophysics Data System (ADS)

    Zimmer, Walter M. X.; Tyack, Peter L.; Johnson, Mark P.; Madsen, Peter T.

    2005-03-01

    The three-dimensional beam pattern of a sperm whale (Physeter macrocephalus) tagged in the Ligurian Sea was derived using data on regular clicks from the tag and from hydrophones towed behind a ship circling the tagged whale. The tag defined the orientation of the whale, while sightings and beamformer data were used to locate the whale with respect to the ship. The existence of a narrow, forward-directed P1 beam with source levels exceeding 210 dBpeak re: 1 μPa at 1 m is confirmed. A modeled forward-beam pattern that matches clicks >20° off-axis predicts a directivity index of 26.7 dB and source levels of up to 229 dBpeak re: 1 μPa at 1 m. A broader backward-directed beam is produced by the P0 pulse with source levels near 200 dBpeak re: 1 μPa at 1 m and a directivity index of 7.4 dB. A low-frequency component with source levels near 190 dBpeak re: 1 μPa at 1 m is generated at the onset of the P0 pulse by air resonance. The results support the bent-horn model of sound production in sperm whales. While the sperm whale nose appears primarily adapted to produce an intense forward-directed sonar signal, less-directional click components convey information to conspecifics, and give rise to echoes from the seafloor and the surface, which may be useful for orientation during dives.

  20. Three-dimensional beam pattern of regular sperm whale clicks confirms bent-horn hypothesis.

    PubMed

    Zimmer, Walter M X; Tyack, Peter L; Johnson, Mark P; Madsen, Peter T

    2005-03-01

    The three-dimensional beam pattern of a sperm whale (Physeter macrocephalus) tagged in the Ligurian Sea was derived using data on regular clicks from the tag and from hydrophones towed behind a ship circling the tagged whale. The tag defined the orientation of the whale, while sightings and beamformer data were used to locate the whale with respect to the ship. The existence of a narrow, forward-directed P1 beam with source levels exceeding 210 dBpeak re: 1 microPa at 1 m is confirmed. A modeled forward-beam pattern that matches clicks >20 degrees off-axis predicts a directivity index of 26.7 dB and source levels of up to 229 dBpeak re: 1 microPa at 1 m. A broader backward-directed beam is produced by the P0 pulse with source levels near 200 dBpeak re: 1 microPa at 1 m and a directivity index of 7.4 dB. A low-frequency component with source levels near 190 dBpeak re: 1 microPa at 1 m is generated at the onset of the P0 pulse by air resonance. The results support the bent-horn model of sound production in sperm whales. While the sperm whale nose appears primarily adapted to produce an intense forward-directed sonar signal, less-directional click components convey information to conspecifics, and give rise to echoes from the seafloor and the surface, which may be useful for orientation during dives.

  1. Remarks on the regularity criteria of three-dimensional magnetohydrodynamics system in terms of two velocity field components

    SciTech Connect

    Yamazaki, Kazuo

    2014-03-15

    We study the three-dimensional magnetohydrodynamics system and obtain its regularity criteria in terms of only two velocity vector field components eliminating the condition on the third component completely. The proof consists of a new decomposition of the four nonlinear terms of the system and estimating a component of the magnetic vector field in terms of the same component of the velocity vector field. This result may be seen as a component reduction result of many previous works [C. He and Z. Xin, “On the regularity of weak solutions to the magnetohydrodynamic equations,” J. Differ. Equ. 213(2), 234–254 (2005); Y. Zhou, “Remarks on regularities for the 3D MHD equations,” Discrete Contin. Dyn. Syst. 12(5), 881–886 (2005)].

  2. Existence and Regularity of Invariant Measures for the Three Dimensional Stochastic Primitive Equations

    SciTech Connect

    Glatt-Holtz, Nathan; Kukavica, Igor; Ziane, Mohammed; Vicol, Vlad

    2014-05-15

    We establish the continuity of the Markovian semigroup associated with strong solutions of the stochastic 3D Primitive Equations, and prove the existence of an invariant measure. The proof is based on new moment bounds for strong solutions. The invariant measure is supported on strong solutions and is furthermore shown to have higher regularity properties.

  3. Connecting Polyakov loops to the thermodynamics of SU(Nc) gauge theories using the gauge-string duality

    NASA Astrophysics Data System (ADS)

    Noronha, Jorge

    2010-02-01

    We show that in four-dimensional gauge theories dual to five-dimensional Einstein gravity coupled to a single scalar field in the bulk, the derivative of the single heavy quark free energy in the deconfined phase is dF_Q(T)/dT ∼ -1/c_s²(T), where c_s(T) is the speed of sound. This general result provides a direct link between the softest point in the equation of state of strongly-coupled plasmas and the deconfinement phase transition described by the expectation value of the Polyakov loop. We give an explicit example of a gravity dual with black hole solutions that can reproduce the lattice results for the expectation value of the Polyakov loop and the thermodynamics of SU(3) Yang-Mills theory in the (nonperturbative) temperature range between T_c and 3T_c.
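    For reference, the textbook relation tying the Polyakov loop to the single heavy-quark free energy F_Q that underlies this statement:

      % Polyakov loop and single heavy-quark free energy (standard definitions):
      L(\vec{x}) = \frac{1}{N_c}\,\mathrm{Tr}\,\mathcal{P}
        \exp\!\Big(ig\int_0^{1/T} d\tau\, A_0(\tau,\vec{x})\Big),
      \qquad
      \langle L\rangle = e^{-F_Q(T)/T}.
      % The paper's result then relates dF_Q/dT to -1/c_s^2(T).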

  4. Regularization of two-dimensional supersymmetric Yang-Mills theory via non-commutative geometry

    NASA Astrophysics Data System (ADS)

    Valavane, K.

    2000-11-01

    Non-commutative geometry is a possible framework for regularizing quantum field theory in a non-perturbative way. The idea is an extension of the lattice approximation by non-commutativity that allows symmetries to be preserved. The supersymmetric version is also studied, more precisely in the case of the Schwinger model on a supersphere. This paper is a generalization of that latter work to more general gauge groups.

  5. Visualizations of coherent center domains in local Polyakov loops

    SciTech Connect

    Stokes, Finn M.; Kamleh, Waseem; Leinweber, Derek B.

    2014-09-15

    Quantum Chromodynamics exhibits a hadronic confined phase at low to moderate temperatures and, at a critical temperature T_C, undergoes a transition to a deconfined phase known as the quark-gluon plasma. The nature of this deconfinement phase transition is probed through visualizations of the Polyakov loop, a gauge independent order parameter. We produce visualizations that provide novel insights into the structure and evolution of center clusters. Using the HMC algorithm the percolation during the deconfinement transition is observed. Using 3D rendering of the phase and magnitude of the Polyakov loop, the fractal structure and correlations are examined. The evolution of the center clusters as the gauge fields thermalize from below the critical temperature to above it is also exposed. We observe deconfinement proceeding through a competition for the dominance of a particular center phase. We use stout-link smearing to remove small-scale noise in order to observe the large-scale evolution of the center clusters. A correlation between the magnitude of the Polyakov loop and the proximity of its phase to one of the center phases of SU(3) is evident in the visualizations.
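    The phase-classification step described here can be mimicked on a toy array of complex Polyakov-loop values (synthetic data only, not the lattice configurations used in the paper):

      import numpy as np

      rng = np.random.default_rng(2)
      # Synthetic local Polyakov-loop field on a 16^3 spatial lattice: values
      # scattered around the three SU(3) center phases with varying magnitude.
      centers = np.exp(2j * np.pi * np.array([0, 1, -1]) / 3)
      sector_true = rng.integers(0, 3, size=(16, 16, 16))
      mag = rng.uniform(0.1, 1.0, size=(16, 16, 16))
      noise = rng.normal(size=(16, 16, 16)) + 1j * rng.normal(size=(16, 16, 16))
      L = mag * centers[sector_true] + 0.05 * noise

      # Angular distance from each site's phase to the nearest center phase.
      dphi = np.angle(L[..., None] / centers)       # phase difference per center
      dist = np.min(np.abs(dphi), axis=-1)
      sector = np.argmin(np.abs(dphi), axis=-1)     # center-cluster label per site

      # Correlation between |L| and phase proximity: in this toy data, larger
      # magnitude goes with smaller angular distance (negative correlation).
      r = np.corrcoef(np.abs(L).ravel(), dist.ravel())[0, 1]
      print("cluster sizes:", np.bincount(sector.ravel()), "corr:", round(r, 3))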

  6. Logarithmical regularity criterion of the three-dimensional Boussinesq equations in terms of the pressure

    NASA Astrophysics Data System (ADS)

    Mechdene, Mohamed; Gala, Sadek; Guo, Zhengguang; Ragusa, Alessandra Maria

    2016-10-01

    This work establishes a sufficient condition for the regularity criterion of the Boussinesq equations in terms of the derivative of the pressure in one direction. It is shown that if the partial derivative of the pressure ∂_3π satisfies the logarithmic Serrin-type condition

      ∫_0^T ||∂_3π(s)||_{L^λ}^q / (1 + ln(1 + ||θ||_{L^4})) ds < ∞  with  2/q + 3/λ = 7/4  and  12/7 < λ ≤ ∞,

    then the solution (u, θ) remains smooth on [0, T]. Compared to the Navier-Stokes result, there is a logarithmic correction involving θ in the denominator.

  7. Polyakov loop, diquarks, and the two-flavor phase diagram

    SciTech Connect

    Roessner, S.; Weise, W.; Ratti, C.

    2007-02-01

    An updated version of the PNJL model is used to study the thermodynamics of N_f = 2 quark flavors interacting through chiral four-point couplings and propagating in a homogeneous Polyakov loop background. Previous PNJL calculations are extended by introducing explicit diquark degrees of freedom and an improved effective potential for the Polyakov loop field. The mean field equations are treated under the aspect of accommodating group theoretical constraints and issues arising from the fermion sign problem. The input is fixed exclusively by selected pure-gauge lattice QCD results and by pion properties in vacuum. The resulting (T, μ) phase diagram is studied with special emphasis on the critical point, its dependence on the quark mass and on Polyakov loop dynamics. We present successful comparisons with lattice QCD thermodynamics expanded to finite chemical potential μ.

  8. The coexistence of a 't Hooft-Polyakov monopole and a one-half monopole

    NASA Astrophysics Data System (ADS)

    Teh, Rosy; Ng, Ban-Loong; Wong, Khai-Ming

    2014-03-01

    Recently we reported on the existence of a finite energy SU(2) Yang-Mills-Higgs particle of one-half topological charge. In this paper, we show that this one-half monopole can coexist with a 't Hooft-Polyakov monopole. The magnetic charge of the one-half monopole is -1/2, while the magnetic charge of the 't Hooft-Polyakov monopole is positive unity. However, the net magnetic charge of the configuration is zero due to the presence of a semi-infinite Dirac string along the positive z-axis that carries a magnetic monopole charge of another -1/2. The solution possesses gauge potentials that are singular along the z-axis but regular elsewhere. This monopole configuration possesses a finite total energy and magnetic dipole moment. The total energy is found to increase with the strength of the Higgs field self-coupling constant λ, whereas the dipole separation and the magnetic dipole moment decrease with λ. This solution is non-BPS even in the BPS limit, when the Higgs self-coupling constant vanishes.

  9. Seiberg-Witten and 'Polyakov-like' Magnetic Bion Confinements are Continuously Connected

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat

    2012-06-01

    We study four-dimensional N = 2 supersymmetric pure-gauge (Seiberg-Witten) theory and its N = 1 mass perturbation by using compactification on S¹ × R³. It is well known that on R⁴ (or at large S¹ size L) the perturbed theory realizes confinement through monopole or dyon condensation. At small S¹, we demonstrate that confinement is induced by a generalization of Polyakov's three-dimensional instanton mechanism to a locally four-dimensional theory - the magnetic bion mechanism - which also applies to a large class of nonsupersymmetric theories. Using a large-L vs. small-L Poisson duality, we show that the two mechanisms of confinement, previously thought to be distinct, are in fact continuously connected.

  10. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    PubMed Central

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-01-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
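    The Lanczos-based reduction described above amounts to building a Krylov basis from the initial state; a compact numpy sketch (the complete-graph Hamiltonian and its marked-vertex oracle term are illustrative choices, not the paper's examples verbatim):

      import numpy as np

      def lanczos(H, v0, m, tol=1e-8):
          """m-step Lanczos: orthonormal basis V and tridiagonal T with
          V.T @ H @ V = T; the dynamics of v0 stay inside span(V)."""
          n = len(v0)
          V = np.zeros((n, m)); a = np.zeros(m); b = np.zeros(max(m - 1, 1))
          V[:, 0] = v0 / np.linalg.norm(v0)
          k = m
          for j in range(m):
              w = H @ V[:, j]
              a[j] = V[:, j] @ w
              w = w - a[j] * V[:, j]
              if j > 0:
                  w = w - b[j - 1] * V[:, j - 1]
              if j == m - 1:
                  break
              b[j] = np.linalg.norm(w)
              if b[j] < tol:            # invariant subspace of dimension j+1 found
                  k = j + 1
                  break
              V[:, j + 1] = w / b[j]
          V, a, b = V[:, :k], a[:k], b[:k - 1]
          T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
          return V, T

      n = 64
      A = np.ones((n, n)) - np.eye(n)   # complete-graph adjacency (walk Hamiltonian)
      H = A.copy()
      H[0, 0] += float(n)               # marked-vertex oracle term (illustrative)
      v0 = np.ones(n)                   # uniform superposition (unnormalized)
      V, T = lanczos(H, v0, m=8)
      print("reduced dimension:", T.shape[0])   # -> 2: the search dynamics live
                                                # in a two-dimensional subspace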

  11. Regularity of the Interfaces with Sign Changes of Solutions of the One-Dimensional Porous Medium Equation

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Shigeru

    2002-01-01

    In previous papers we considered the Cauchy problem for the one-dimensional evolution p-Laplacian equation for nonzero, bounded, and nonnegative initial data having compact support, and showed that after a finite time the set of spatial critical points of the nonnegative solution u = u(x, t) in {u > 0} consists of one point, the spatial maximum point of u, and the curve of the spatial maximum points is continuous with respect to the time variable. Since the spatial derivative ∂xu satisfies the porous medium equation with sign changes, the curve of the spatial maximum points is regarded as an interface with sign changes of ∂xu. On the other hand, in a paper by M. Bertsch and D. Hilhorst (1991, Appl. Anal. 41, 111-130) the interfaces where the solutions change their sign were studied in detail for the initial-boundary value problems of the generalized porous medium equation over two-dimensional cylinders. But the monotonicity of the initial data is assumed there. As is noted in Section 4 of our earlier work (1996, J. Math. Anal. Appl. 203, 78-103), the monotonicity of ∂xu(·, t) in some neighborhood of the spatial maximum point of u(·, t) cannot be assumed, and therefore, if this monotonicity for some large t > 0 is proved, then by the method of Bertsch and Hilhorst (cited above) one may get more precise regularity properties of the curve of the spatial maximum points. The purpose of the present paper is twofold. One is to remove some monotonicity assumptions for initial data in Bertsch and Hilhorst's theorem concerning the regularity of the interfaces with sign changes of solutions of the one-dimensional generalized porous medium equation. By comparing the solution with appropriate symmetric nonnegative solutions we shall get the monotonicity of the solution near the interface after a finite time. The other is, as a by-product of the method, to get C1 regularity of the curves of the spatial maximum points of nonnegative solutions of the Cauchy problem for the evolution p-Laplacian equation.

  12. Second-order equation of state with the Skyrme interaction: Cutoff and dimensional regularization with the inclusion of rearrangement terms

    NASA Astrophysics Data System (ADS)

    Yang, C. J.; Grasso, M.; Roca-Maza, X.; Colò, G.; Moghrabi, K.

    2016-09-01

    We evaluate the second-order (beyond-mean-field) contribution to the equation of state of nuclear matter with the effective Skyrme force and use cutoff and dimensional regularizations to treat the ultraviolet divergence produced by the zero-range character of this interaction. An adjustment of the force parameters is then performed in both cases to remove any double counting generated by the explicit computation of beyond-mean-field corrections with the Skyrme force. In addition, we include at second order the rearrangement terms associated with the density-dependent part of the Skyrme force and discuss their effect. Sets of parameters are proposed to define new effective forces which are specially designed for second-order calculations in nuclear matter.

  13. Uniform Regularity and Vanishing Dissipation Limit for the Full Compressible Navier-Stokes System in Three Dimensional Bounded Domain

    NASA Astrophysics Data System (ADS)

    Wang, Yong

    2016-09-01

    In the present paper, we study the uniform regularity and vanishing dissipation limit for the full compressible Navier-Stokes system whose viscosity and heat conductivity are allowed to vanish at different orders. The problem is studied in a three dimensional bounded domain with Navier-slip type boundary conditions. It is shown that there exists a unique strong solution to the full compressible Navier-Stokes system with these boundary conditions on a finite time interval which is independent of the viscosity and heat conductivity. The solution is uniformly bounded in W^{1,∞} and in a conormal Sobolev space. Based on such uniform estimates, we prove the convergence of the solutions of the full compressible Navier-Stokes system to the corresponding solutions of the full compressible Euler system in L^∞(0,T; L²), L^∞(0,T; H¹) and L^∞([0,T]×Ω) with a rate of convergence.

  14. Regularization Method for Predicting an Ordinal Response Using Longitudinal High-dimensional Genomic Data

    PubMed Central

    Hou, Jiayi

    2015-01-01

    An ordinal scale is commonly used to measure health status and disease related outcomes in hospital settings as well as in translational medical research. In addition, repeated measurements are common in clinical practice for tracking and monitoring the progression of complex diseases. Classical methodology based on statistical inference, in particular ordinal modeling, has contributed to the analysis of data in which the response categories are ordered and the number of covariates (p) remains smaller than the sample size (n). With the emergence of genomic technologies being increasingly applied for more accurate diagnosis and prognosis, high-dimensional data, where the number of covariates (p) is much larger than the number of samples (n), are generated. To meet the emerging needs, we introduce our proposed model, which is a two-stage algorithm: we first extend the Generalized Monotone Incremental Forward Stagewise (GMIFS) method to the cumulative logit ordinal model, and then combine the GMIFS procedure with the classical mixed-effects model to classify disease status along disease progression over time. We demonstrate the efficiency and accuracy of the proposed models in classification using a time-course microarray dataset collected from the Inflammation and the Host Response to Injury study. PMID:25720102

  15. The Polyakov relation for the sphere and higher genus surfaces

    NASA Astrophysics Data System (ADS)

    Menotti, Pietro

    2016-05-01

    The Polyakov relation, which in the sphere topology gives the changes of the Liouville action under the variation of the position of the sources, is also related in the case of higher genus to the dependence of the action on the moduli of the surface. We write and prove such a relation for genus 1 and for all hyperelliptic surfaces.

  16. Simplicial pseudorandom lattice study of a three-dimensional Abelian gauge model, the regular lattice as an extremum of the action

    SciTech Connect

    Pertermann, D.; Ranft, J.

    1986-09-15

    We introduce a simplicial pseudorandom version of lattice gauge theory. In this formulation it is possible to interpolate continuously between a regular simplicial lattice and a pseudorandom lattice. Using this method we study a simple three-dimensional Abelian lattice gauge theory. Calculating average plaquette expectation values, we find an extremum of the action for our regular simplicial lattice. Such a behavior was found in analytical studies in one and two dimensions.

  17. Duality and the Knizhnik-Polyakov-Zamolodchikov relation in Liouville quantum gravity.

    PubMed

    Duplantier, Bertrand; Sheffield, Scott

    2009-04-17

    We present a (mathematically rigorous) probabilistic and geometrical proof of the Knizhnik-Polyakov-Zamolodchikov relation between scaling exponents in a Euclidean planar domain D and in Liouville quantum gravity. It uses the properly regularized quantum area measure dμ_γ = ε^{γ²/2} e^{γ h_ε(z)} dz, where dz is the Lebesgue measure on D, γ is a real parameter with 0 ≤ γ < 2, and h_ε(z) denotes the mean value of an instance h of the Gaussian free field on the circle of radius ε centered at z. The singular case γ > 2 is shown to be related to the quantum measure dμ_{γ'}, γ' < 2, by the fundamental duality γγ' = 4.

  18. Visualizations of coherent center domains in local Polyakov loops

    NASA Astrophysics Data System (ADS)

    Stokes, Finn M.; Kamleh, Waseem; Leinweber, Derek B.

    2014-09-01

    Quantum Chromodynamics exhibits a hadronic confined phase at low to moderate temperatures and, at a critical temperature T_C, undergoes a transition to a deconfined phase known as the quark-gluon plasma. The nature of this deconfinement phase transition is probed through visualizations of the Polyakov loop, a gauge independent order parameter. We produce visualizations that provide novel insights into the structure and evolution of center clusters. Using the HMC algorithm the percolation during the deconfinement transition is observed. Using 3D rendering of the phase and magnitude of the Polyakov loop, the fractal structure and correlations are examined. The evolution of the center clusters as the gauge fields thermalize from below the critical temperature to above it is also exposed. We observe deconfinement proceeding through a competition for the dominance of a particular center phase. We use stout-link smearing to remove small-scale noise in order to observe the large-scale evolution of the center clusters. A correlation between the magnitude of the Polyakov loop and the proximity of its phase to one of the center phases of SU(3) is evident in the visualizations.

  19. From chiral quark dynamics with Polyakov loop to the hadron resonance gas model

    SciTech Connect

    Arriola, E. R.; Salcedo, L. L.; Megias, E.

    2013-03-25

    Chiral quark models with Polyakov loop at finite temperature have been often used to describe the phase transition. We show how the transition to a hadron resonance gas is realized based on the quantum and local nature of the Polyakov loop.

  20. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR).

    PubMed

    Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T; Cooper, Benjamin J; Kuncic, Zdenka; Keall, Paul J

    2015-01-21

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and did

  1. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    NASA Astrophysics Data System (ADS)

    Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T.; Cooper, Benjamin J.; Kuncic, Zdenka; Keall, Paul J.

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and did

  2. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T; Cooper, Benjamin J; Kuncic, Zdenka; Keall, Paul J

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan, and was compared to FDK, ASD-POCS, and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS, and
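    The anatomy-adaptive suppression can be mimicked by spatially weighting the TV penalty (a toy 2-D sketch: the "segmentation", weights, and update rule are illustrative assumptions, not the published AAIR parameters):

      import numpy as np

      def weighted_tv_grad(x, w, eps=1e-6):
          """Gradient of a spatially weighted smoothed-TV penalty."""
          dx = np.diff(x, axis=0, append=x[-1:, :])
          dy = np.diff(x, axis=1, append=x[:, -1:])
          mag = np.sqrt(dx**2 + dy**2 + eps)
          px, py = w * dx / mag, w * dy / mag
          return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64))
      truth[16:48, 16:48] = 1.0; truth[28:36, 28:36] = 2.0
      noisy = truth + 0.3 * rng.normal(size=truth.shape)

      # "Segmentation": mark high-intensity anatomy and relax smoothing there.
      mask = noisy > 1.2
      w = np.where(mask, 0.2, 1.0)       # weaker TV inside structures of interest

      x = noisy.copy()
      for _ in range(200):               # descent on 0.5||x-b||^2 + 0.2*wTV(x)
          x -= 0.2 * ((x - noisy) + 0.2 * weighted_tv_grad(x, w))
      print("rmse:", np.sqrt(np.mean((x - truth) ** 2)))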

  3. Fuzzy bags, Polyakov loop and gauge/string duality

    NASA Astrophysics Data System (ADS)

    Zuo, Fen

    2014-11-01

    Confinement in SU(N) gauge theory is due to the linear potential between colored objects. At short distances, the linear contribution could be considered as the quadratic correction to the leading Coulomb term. Recent lattice data show that such quadratic corrections also appear in the deconfined phase, in both the thermal quantities and the Polyakov loop. These contributions are studied systematically employing the gauge/string duality. "Confinement" in N = 4 SU(N) Super Yang-Mills (SYM) theory could be achieved kinematically when the theory is defined on a compact space manifold. In the large-N limit, deconfinement of N = 4 SYM on S³ at strong coupling is dual to the Hawking-Page phase transition in the global anti-de Sitter spacetime. Meanwhile, all the thermal quantities and the Polyakov loop acquire significant quadratic contributions. Similar results can also be obtained at weak coupling. However, when confinement is induced dynamically through the local dilaton field in the gravity-dilaton system, these contributions cannot be generated consistently. This is in accordance with the fact that there is no dimension-2 gauge-invariant operator in the boundary gauge theory. Based on these results, we suspect that quadratic corrections, and also confinement, should be due to global or non-local effects in the bulk spacetime.

  4. Entropy-based viscous regularization for the multi-dimensional Euler equations in low-Mach and transonic flows

    SciTech Connect

    Delchini, Marc O.; Ragusa, Jean E.; Berry, Ray A.

    2015-07-01

    We present a new version of the entropy viscosity method, a viscous regularization technique for hyperbolic conservation laws, that is well-suited for low-Mach flows. By means of a low-Mach asymptotic study, new expressions for the entropy viscosity coefficients are derived. These definitions are valid for a wide range of Mach numbers, from subsonic flows (with very low Mach numbers) to supersonic flows, and no longer depend on an analytical expression for the entropy function. In addition, the entropy viscosity method is extended to Euler equations with variable area for nozzle flow problems. The effectiveness of the method is demonstrated using various 1-D and 2-D benchmark tests: flow in a converging–diverging nozzle; Leblanc shock tube; slow moving shock; strong shock for liquid phase; low-Mach flows around a cylinder and over a circular hump; and supersonic flow in a compression corner. Convergence studies are performed for smooth solutions and solutions with shocks present.
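    The entropy viscosity construction being modified here has, in its generic form (the Guermond-Pasquetti-Popov definition as usually quoted; the paper's low-Mach coefficients differ), the shape:

      % Entropy residual and cell-wise viscosity on a mesh of cell size h_K:
      R_s = \partial_t s + \nabla\cdot(u\,s),
      \qquad
      \nu_E\big|_K = \min\Big(c_{\max}\,h_K\,\|u\|_{L^\infty(K)},\;
        c_E\,h_K^2\,\frac{\|R_s\|_{L^\infty(K)}}{\|s-\bar{s}\|_{L^\infty(\Omega)}}\Big),
      % with s an entropy function for the system and \bar{s} its spatial average;
      % the viscosity is large only where the entropy residual detects a shock.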

  5. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    NASA Astrophysics Data System (ADS)

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high performance nanostructured solar cells, but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin film photovoltaic industry.

  6. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-07

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high performance nanostructured solar cells, but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin film photovoltaic industry.

  7. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high performance nanostructured solar cells, but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin film photovoltaic industry. PMID:24603964

  8. Short and independent characteristic methods for discrete ordinates radiation transport with two-dimensional and three-dimensional regular Cartesian meshes

    NASA Astrophysics Data System (ADS)

    Suriano, Mark Allen

    2001-07-01

    Accurate, reliable, and robust discrete neutral particle radiation transport codes are needed in order to perform realistic 3D engineering calculations. Current neutron transport codes use low order spatial quadratures that are inaccurate unless a highly refined spatial mesh is used. In this work various higher order characteristic spatial quadratures are derived, implemented, and tested. Regular meshes of rectangular (2D) and of rectangular parallelepiped (boxoid) cells are supported. Short characteristic (linear characteristic [LC] and exponential characteristic [EC]) methods are compared with the corresponding independent characteristic (ILC and IEC) methods. The latter readily provide for plane parallel implementation. All transport results were benchmarked against Monte Carlo calculations. The diamond difference (DD) method was also tested and compared to the characteristic spatial quadratures. IEC and EC were found to be robust, reliable, and accurate for thin, intermediate, and optically thick cells. LC was robust, reliable, and accurate for cells of thin to intermediate (approximately 2 mean free paths) optical thickness. ILC was not pursued in 3D due to its anticipated excessive computational cost. DD was unreliable (as expected) over the range of test problems. We conclude that IEC and EC are apt methods for a wide range of problems, and provide the ability to perform realistic engineering calculations on coarse cells given nonnegative group-to-group, ordinate-to-ordinate cross section data.
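
    As a hedged illustration of the closures compared above, the sketch below sweeps a single positive ordinate across a 1D slab (a toy stand-in for the dissertation's 2D/3D Cartesian meshes; all names and parameters are illustrative) and contrasts the diamond-difference closure with a step-characteristic closure, the simplest member of the characteristic family.

      # Hypothetical 1D illustration, not the dissertation's code: sweep one
      # ordinate mu > 0 across slab cells, closing the per-cell balance with
      # either the diamond-difference (DD) or a step-characteristic (SC) relation.
      import numpy as np

      def sweep(sigma_t, q, dx, mu, psi_in, scheme="SC"):
          """Transport sweep for one positive ordinate; returns cell-average fluxes."""
          n = len(q)
          psi_avg = np.zeros(n)
          for i in range(n):
              tau = sigma_t[i] * dx[i] / mu            # optical thickness along mu
              if scheme == "DD":                       # diamond-difference closure
                  psi_out = ((1 - tau / 2) * psi_in + (q[i] / sigma_t[i]) * tau) / (1 + tau / 2)
                  psi_avg[i] = 0.5 * (psi_in + psi_out)
              else:                                    # exact in-cell integration
                  att = np.exp(-tau)
                  psi_out = psi_in * att + (q[i] / sigma_t[i]) * (1 - att)
                  psi_avg[i] = q[i] / sigma_t[i] + (psi_in - psi_out) / tau
              psi_in = psi_out
          return psi_avg

    The DD exiting flux can turn negative once tau exceeds about 2, which is consistent with the abstract's finding that DD is unreliable while characteristic closures remain robust on optically thick cells.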

  9. Regular and Chaotic Ray and Wave Mappings for Two and Three-Dimensional Systems with Applications to a Periodically Perturbed Waveguide.

    NASA Astrophysics Data System (ADS)

    Ratowsky, Ricky Paul

    We investigate quantum or wave dynamics for a system which is stochastic in the classical or eikonal (ray) limit. This system is a mapping which couples the standard mapping to an additional degree of freedom. We observe numerically, in most but not all cases, the asymptotic (in time) limitation of diffusion in the classically strongly chaotic regime, and the inhibition of Arnold diffusion when there exist KAM surfaces classically. We present explicitly the two-dimensional asymptotic localized distributions for each case, when they exist. The scaling of the characteristic widths of the localized distributions with coupling strength has been determined. A simple model accounts for the observed behavior in the limit of weak coupling, and we derive a scaling law for the diffusive time scale in the system. We explore some implications of the wave mapping for a class of optical or acoustical systems: a parallel plate waveguide or duct with a periodically perturbed boundary (a grating), and a lens waveguide with nonlinear focusing elements. We compute the ray trajectories of each system, using a Poincare surface of section to study the dynamics. Each system leads to a near-integrable ray Hamiltonian: the phase space splits into regions showing regular or chaotic behavior. The solutions to the scalar Helmholtz equation are found via a secular equation determining the eigenfrequencies. A wave mapping is derived for the system in the paraxial regime. We find that localization should occur, limiting the beam spread in both wavevector and configuration space. In addition, we consider the effect of retaining higher order terms in the paraxial expansion. Although we focus largely on the two-dimensional case, we make some remarks concerning the four-dimensional mapping for this system.
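
    The classical backbone of the system studied above is the Chirikov standard map; the minimal sketch below iterates the uncoupled map only (the abstract's coupling to an additional degree of freedom is omitted), with an illustrative value of K.

      # Uncoupled Chirikov standard map, the classical backbone of the
      # abstract's coupled system (the extra degree of freedom is omitted).
      import numpy as np

      def standard_map(p, theta, K, n_steps):
          """Iterate p' = p + K*sin(theta), theta' = theta + p' (mod 2*pi)."""
          traj = np.empty((n_steps, 2))
          for i in range(n_steps):
              p = p + K * np.sin(theta)
              theta = (theta + p) % (2 * np.pi)
              traj[i] = p, theta
          return traj

      orbit = standard_map(p=0.5, theta=1.0, K=1.2, n_steps=10_000)  # K above ~0.97: global chaos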

  10. Constituent Quarks and Gluons, Polyakov loop and the Hadron Resonance Gas Model

    NASA Astrophysics Data System (ADS)

    Megías, E.; Ruiz Arriola, E.; Salcedo, L. L.

    2014-03-01

    Based on first-principle QCD arguments, it has been argued in [1] that the vacuum expectation value of the Polyakov loop can be represented in the hadron resonance gas model. We study this within the Polyakov-constituent quark model by implementing the quantum and local nature of the Polyakov loop [2, 3]. The existence of exotic states in the spectrum is discussed. Presented by E. Megías at the International Nuclear Physics Conference INPC 2013, 2-7 June 2013, Firenze, Italy. Supported by Plan Nacional de Altas Energías (FPA2011-25948), DGI (FIS2011-24149), Junta de Andalucía grant FQM-225, Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), Spanish MINECO's Centro de Excelencia Severo Ochoa Program grant SEV-2012-0234, and the Juan de la Cierva Program.

  11. Propagator, sewing rules, and vacuum amplitude for the Polyakov point particles with ghosts

    SciTech Connect

    Giannakis, I.; Ordonez, C.R.; Rubin, M.A.; Zucchini, R.

    1989-01-01

    The authors apply techniques developed for strings to the case of the spinless point particle. The Polyakov path integral with ghosts is used to obtain the propagator and one-loop vacuum amplitude. The propagator is shown to correspond to the Green's function for the BRST field theory in Siegel gauge. The reparametrization invariance of the Polyakov path integral is shown to lead automatically to the correct trace log result for the one-loop diagram, despite the fact that naive sewing of the ends of a propagator would give an incorrect answer. This type of failure of naive sewing is identical to that found in the string case. The present treatment provides, in the simplified context of the point particle, a pedagogical introduction to Polyakov path integral methods with and without ghosts.

  12. Extensions and further applications of the nonlocal Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Hell, T.; Weise, W.; Kashiwa, K.

    2011-06-01

    The nonlocal Polyakov-loop-extended Nambu-Jona-Lasinio model is further improved by including momentum-dependent wave-function renormalization in the quark quasiparticle propagator. Both two- and three-flavor versions of this improved Polyakov-loop-extended Nambu-Jona-Lasinio model are discussed, the latter with inclusion of the (nonlocal) 't Hooft-Kobayashi-Maskawa determinant interaction in order to account for the axial U(1) anomaly. Thermodynamics and phases are investigated and compared with recent lattice-QCD results.

  13. Nonperturbative study of the 't Hooft-Polyakov monopole form factors

    NASA Astrophysics Data System (ADS)

    Rajantie, Arttu; Weir, David J.

    2012-01-01

    The mass and interactions of a quantum ’t Hooft-Polyakov monopole are measured nonperturbatively using correlation functions in lattice Monte Carlo simulations. A method of measuring the form factors for interactions between the monopole and fundamental particles, such as the photon, is demonstrated. These quantities are potentially of experimental relevance in searches for magnetic monopoles.

  14. Phase transition of strongly interacting matter with a chemical potential dependent Polyakov loop potential

    NASA Astrophysics Data System (ADS)

    Shao, Guo-yun; Tang, Zhan-duo; Di Toro, Massimo; Colonna, Maria; Gao, Xue-yan; Gao, Ning

    2016-07-01

    We construct a hadron-quark two-phase model based on Walecka quantum hadrodynamics and the improved Polyakov-Nambu-Jona-Lasinio (PNJL) model with an explicit chemical potential dependence of the Polyakov loop potential (μPNJL model). With respect to the original PNJL model, the confined-deconfined phase transition is largely affected at low temperature and large chemical potential. Using the two-phase model, we investigate the equilibrium transition between hadronic and quark matter at finite chemical potentials and temperatures. The numerical results show that the transition boundaries from nuclear to quark matter move towards smaller chemical potential (lower density) when the μ-dependent Polyakov loop potential is employed. In particular, for charge asymmetric matter, we compute the local asymmetry of u, d quarks in the hadron-quark coexisting phase, and analyze the isospin-relevant observables possibly measurable in heavy-ion collision (HIC) experiments. In general, new HIC data on the location and properties of the mixed phase would bring relevant information on the expected chemical potential dependence of the Polyakov loop contribution.
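
    For orientation, a commonly used logarithmic ansatz for the Polyakov-loop potential from the PNJL literature is written out below; the paper's exact μPNJL parametrization may differ. The coefficients a_i, b_3 and the scale T_0 are fit parameters, and the μPNJL idea is to promote T_0 to a decreasing function T_0(μ).

      % Generic logarithmic Polyakov-loop potential (PNJL literature);
      % the muPNJL construction replaces T_0 by a mu-dependent T_0(mu).
      \frac{\mathcal{U}(\Phi,\bar\Phi;T)}{T^{4}}
        = -\frac{a(T)}{2}\,\bar\Phi\Phi
          + b(T)\,\ln\!\left[1 - 6\,\bar\Phi\Phi
          + 4\left(\Phi^{3}+\bar\Phi^{3}\right)
          - 3\left(\bar\Phi\Phi\right)^{2}\right],
      \qquad
      a(T) = a_{0} + a_{1}\frac{T_{0}}{T} + a_{2}\left(\frac{T_{0}}{T}\right)^{2},
      \qquad
      b(T) = b_{3}\left(\frac{T_{0}}{T}\right)^{3}.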

  15. Renormalized Polyakov loop in the deconfined phase of SU(N) gauge theory and gauge-string duality.

    PubMed

    Andreev, Oleg

    2009-05-29

    We use gauge-string duality to analytically evaluate the renormalized Polyakov loop in pure Yang-Mills theories. For SU(3), the result is in quite good agreement with lattice simulations for a broad temperature range.

  16. Thermodynamics of a three-flavor nonlocal Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Hell, T.; Roessner, S.; Cristoforetti, M.; Weise, W.

    2010-04-01

    The present work generalizes a nonlocal version of the Polyakov-loop-extended Nambu and Jona-Lasinio (PNJL) model to the case of three active quark flavors, with inclusion of the axial U(1) anomaly. Gluon dynamics is incorporated through a gluonic background field, expressed in terms of the Polyakov loop. The thermodynamics of the nonlocal PNJL model accounts for both chiral and deconfinement transitions. Our results obtained in mean-field approximation are compared to lattice QCD results for N_f = 2+1 quark flavors. Additional pionic and kaonic contributions to the pressure are calculated in random phase approximation. Finally, this nonlocal three-flavor PNJL model is applied to the finite density region of the QCD phase diagram. It is confirmed that the existence and location of a critical point in this phase diagram depend sensitively on the strength of the axial U(1) breaking interaction.

  17. Hydrodynamics of the Polyakov line in SU(Nc) Yang-Mills

    NASA Astrophysics Data System (ADS)

    Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail

    2016-02-01

    We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite Nc for Yang-Mills theory in even and odd space-time dimensions. The hydrostatic solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the deconfined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of Nc, and are consistent with the string model results at Nc = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(Nc) bubble using a piecewise sound wave is suggested.

  18. Nonconvex Regularization in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Tuia, Devis; Flamary, Remi; Barlaud, Michel

    2016-11-01

    In this paper, we study the effect of different regularizers and their implications in high-dimensional image classification and sparse linear unmixing. Although kernelization or sparse methods are globally accepted solutions for processing data in high dimensions, we present here a study on the impact of the form of regularization used and of its parametrization. We consider regularization via the traditional squared ℓ2 and sparsity-promoting ℓ1 norms, as well as the more unconventional nonconvex regularizers (ℓp and the Log-Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community.
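
    The four penalty families compared above can be written down directly; the sketch below evaluates them on a coefficient vector (the p and eps values are illustrative, not the paper's settings).

      # The regularizers compared in the abstract, evaluated on a coefficient
      # vector w; p and eps are illustrative values, not the paper's settings.
      import numpy as np

      def l2_squared(w):        return float(np.sum(w**2))          # smooth, no sparsity
      def l1(w):                return float(np.sum(np.abs(w)))     # convex, sparsity-promoting
      def lp(w, p=0.5):         return float(np.sum(np.abs(w)**p))  # nonconvex, 0 < p < 1
      def log_sum_penalty(w, eps=1e-3):
          return float(np.sum(np.log(1.0 + np.abs(w) / eps)))       # nonconvex, near-l0

      w = np.array([0.0, 0.01, 1.0, -2.0])
      for penalty in (l2_squared, l1, lp, log_sum_penalty):
          print(penalty.__name__, penalty(w))

    Relative to ℓ1, the nonconvex penalties charge small nonzero coefficients proportionally more, which is why they drive sparser solutions at the price of a harder optimization problem.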

  19. Phase diagram and critical properties in the Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Sousa, C. A. de; Costa, P.; Ruivo, M. C.; Hansen, H.

    2011-05-23

    We investigate the phase diagram of the so-called Polyakov-Nambu-Jona-Lasinio model at finite temperature and nonzero chemical potential with three quark flavours. Chiral and deconfinement phase transitions are discussed, and the relevant order-like parameters are analyzed. The results are compared with simple thermodynamic expectations and lattice data. Special attention is paid to the critical end point: as the strength of the flavour-mixing interaction becomes weaker, the critical end point moves to low temperatures and can even disappear.

  20. Dynamics and thermodynamics of a nonlocal Polyakov--Nambu--Jona-Lasinio model with running coupling

    SciTech Connect

    Hell, T.; Roessner, S.; Cristoforetti, M.; Weise, W.

    2009-01-01

    A nonlocal covariant extension of the two-flavor Nambu and Jona-Lasinio model is constructed, with built-in constraints from the running coupling of QCD at high-momentum and instanton physics at low-momentum scales. Chiral low-energy theorems and basic current algebra relations involving pion properties are shown to be reproduced. The momentum-dependent dynamical quark mass derived from this approach is in agreement with results from Dyson-Schwinger equations and lattice QCD. At finite temperature, inclusion of the Polyakov loop and its gauge invariant coupling to quarks reproduces the dynamical entanglement of the chiral and deconfinement crossover transitions as in the (local) Polyakov-loop-extended Nambu and Jona-Lasinio model, but now without the requirement of introducing an artificial momentum cutoff. Steps beyond the mean-field approximation are made including mesonic correlations through quark-antiquark ring summations. Various quantities of interest (pressure, energy density, speed of sound, etc.) are calculated and discussed in comparison with lattice QCD thermodynamics at zero chemical potential. The extension to finite quark chemical potential and the phase diagram in the (T, μ) plane are also discussed.

  1. A two-dimensional locally regularized strain estimation technique: preliminary clinical results for the assessment of benign and malignant breast lesions

    NASA Astrophysics Data System (ADS)

    Brusseau, Elisabeth; Detti, Valérie; Coulon, Agnès; Maissiat, Emmanuèle; Boublay, Nawèle; Berthezène, Yves; Fromageau, Jérémie; Bush, Nigel; Bamber, Jeffrey

    2011-03-01

    We previously developed a 2D locally regularized strain estimation technique that was already validated with ex vivo tissues. In this study, our technique is assessed with in vivo data, by examining breast abnormalities in clinical conditions. Method reliability is analyzed, as are the tissue strain fields according to the benign or malignant character of the lesion. Ultrasound RF data were acquired in two centers on ten lesions, five classified as fibroadenomas and the other five as malignant tumors, mainly ductal carcinomas from grades I to III. The estimation procedure we developed involves maximizing a similarity criterion (the normalized correlation coefficient, or NCC) between pre- and post-compression images, with deformation effects taken into account. The probability of correct strain estimation is higher if this coefficient is closer to 1. Results demonstrated the ability of our technique to provide good-quality strain images with clinical data. For all lesions, movies of tissue strain during compression were obtained, with strains that can reach 15%. The NCC averaged over each movie was computed, leading for the ten cases to a mean value of 0.93, a minimum value of 0.87 and a maximum value of 0.98. These high NCC values confirm the reliability of the strain estimation. Moreover, lesions were clearly identified in all ten cases investigated. Finally, we have observed with malignant lesions that, compared to ultrasound data, strain images can reveal a larger lesion extent, and can help in evaluating the lesion's invasive character.
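
    The similarity criterion at the heart of the method, the normalized correlation coefficient, is simple to state; the sketch below computes it for two RF windows (a minimal fragment, not the authors' full 2D locally regularized estimator).

      # Normalized correlation coefficient (NCC) between a pre- and a
      # post-compression RF window; minimal fragment of the criterion the
      # estimator maximizes, not the full locally regularized 2D search.
      import numpy as np

      def ncc(a, b):
          a = a - a.mean()
          b = b - b.mean()
          return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

    Values close to 1, such as the movie-averaged 0.93 reported above, indicate that the matched windows differ essentially by the modeled deformation.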

  2. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
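
    A crude way to see the idea is to clip the sample eigenvalues so the condition number cannot exceed a chosen bound; the paper's maximum likelihood estimator chooses the clipping level optimally, which the naive sketch below does not.

      # Naive eigenvalue clipping that enforces a condition-number bound on a
      # sample covariance; the paper derives the clipping level from maximum
      # likelihood, whereas here it is fixed crudely from the top eigenvalue.
      import numpy as np

      def cond_regularize(S, kappa_max):
          lam, V = np.linalg.eigh(S)
          floor = lam.max() / kappa_max       # naive lower clipping level
          return (V * np.clip(lam, floor, None)) @ V.T

      rng = np.random.default_rng(0)
      X = rng.standard_normal((20, 50))       # "large p, small n": n = 20, p = 50
      S_reg = cond_regularize(np.cov(X, rowvar=False), kappa_max=100.0)
      print(np.linalg.cond(S_reg))            # about 100 by construction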

  3. Transport Code for Regular Triangular Geometry

    1993-06-09

    DIAMANT2 solves the two-dimensional static multigroup neutron transport equation in planar regular triangular geometry. Both regular and adjoint, inhomogeneous and homogeneous problems subject to vacuum, reflective or input specified boundary flux conditions are solved. Anisotropy is allowed for the scattering source. Volume and surface sources are allowed for inhomogeneous problems.

  4. A technique for calculating the γ-matrix structures of the diagrams of a total four-fermion interaction with infinite number of vertices in d=2+ɛ dimensional regularization

    NASA Astrophysics Data System (ADS)

    Vasil'Ev, A. N.; Derkachev, S. É.; Kivel', N. A.

    1995-05-01

    It is known [1] that in d=2+ɛ dimensional regularization any four-fermion interaction generates an infinite number of counterterms of the form (ψ̄γ^(n)ψ)², where γ^(n)_{α1...αn} ≡ As[γ_{α1}...γ_{αn}] is an antisymmetrized product of γ matrices. Therefore, a multiplicatively renormalizable complete model must include all such vertices, and the calculation of the γ-matrix factors of its diagrams is a rather complicated problem. An effective technique for such calculations is proposed here. Its main elements are the realization of the γ matrices by the operators of a fermionic free field, transition to generating functions and functionals, the use of various functional forms of Wick's theorem, and reduction of the general d-dimensional problem to the case d=1. The general method is illustrated by specific calculations of the γ factors of one- and two-loop diagrams with an arbitrary set of vertices γ^(n) ⊗ γ^(n).

  5. Magnetic susceptibility of the QCD vacuum in a nonlocal SU(3) Polyakov-Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Pagura, V. P.; Gómez Dumm, D.; Noguera, S.; Scoccola, N. N.

    2016-09-01

    The magnetic susceptibility of the QCD vacuum is analyzed in the framework of a nonlocal SU(3) Polyakov-Nambu-Jona-Lasinio model. Considering two different model parametrizations, we estimate the values of the u - and s -quark tensor coefficients and magnetic susceptibilities and then we extend the analysis to finite temperature systems. Our numerical results are compared to those obtained in other theoretical approaches and in lattice QCD calculations.

  6. New Two-Body Regularization

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2007-01-01

    We present a new scheme to regularize a three-dimensional two-body problem under perturbations. It is a combination of Sundman's time transformation and Levi-Civita's spatial coordinate transformation applied to the two-dimensional components of the position and velocity vectors in the osculating orbital plane. We adopt a coordinate triad specifying the plane as a function of the orbital angular momentum vector only. Since the magnitude of the orbital angular momentum is explicitly computed from the in-the-plane components of the position and velocity vectors, only two components of the orbital angular momentum vector are to be determined. In addition to these, we select the total energy of the two-body system and the physical time as additional components of the new variables. The equations of motion of the new variables have no singularity even when the mutual distance is extremely small, and therefore the new variables are suitable for dealing with close encounters. As a result, the number of dependent variables in the new scheme becomes eight, which is significantly smaller than in the existing schemes for avoiding close encounters: two less than in the Kustaanheimo-Stiefel and Bürdet-Ferrandiz regularizations, and five less than in the Sperling-Bürdet/Bürdet-Heggie regularization.
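
    One ingredient of the scheme, the Sundman time transformation dt = r ds, is easy to isolate; the sketch below integrates the planar Kepler problem in the fictitious time s (the Levi-Civita spatial transformation and all perturbation terms are omitted, and the parameters are illustrative).

      # Sundman time regularization dt = r*ds for the planar Kepler problem:
      # fixed steps in the fictitious time s automatically refine the physical
      # time step near pericenter.  The Levi-Civita spatial transformation and
      # all perturbations are omitted from this sketch.
      import numpy as np

      def rhs(state, mu=1.0):
          """d/ds of (x, y, vx, vy, t), with dt/ds = r."""
          x, y, vx, vy, t = state
          r = np.hypot(x, y)
          return np.array([r * vx, r * vy, -mu * x / r**2, -mu * y / r**2, r])

      def rk4_step(state, ds):
          k1 = rhs(state)
          k2 = rhs(state + 0.5 * ds * k1)
          k3 = rhs(state + 0.5 * ds * k2)
          k4 = rhs(state + ds * k3)
          return state + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      state = np.array([1.0, 0.0, 0.0, 1.2, 0.0])   # eccentric orbit, mu = 1
      for _ in range(2000):
          state = rk4_step(state, ds=0.01)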

  7. Thermodynamics and quark susceptibilities: A Monte Carlo approach to the Polyakov-Nambu-Jona-Lasinio model

    SciTech Connect

    Cristoforetti, M.; Hell, T.; Klein, B.; Weise, W.

    2010-06-01

    The Monte Carlo method is applied to the Polyakov-loop-extended Nambu-Jona-Lasinio model. This leads beyond the saddle-point approximation in a mean-field calculation and introduces fluctuations around the mean fields. We study the impact of fluctuations on the thermodynamics of the model, both in the case of pure gauge theory and including two quark flavors. In the two-flavor case, we calculate the second-order Taylor expansion coefficients of the thermodynamic grand canonical partition function with respect to the quark chemical potential and present a comparison with extrapolations from lattice QCD. We show that the introduction of fluctuations produces only small changes in the behavior of the order parameters for chiral symmetry restoration and the deconfinement transition. On the other hand, we find that fluctuations are necessary in order to reproduce lattice data for the flavor nondiagonal quark susceptibilities. Of particular importance are pion fields, the contribution of which is strictly zero in the saddle-point approximation.

  8. Meson properties at finite temperature in a three flavor nonlocal chiral quark model with Polyakov loop

    SciTech Connect

    Contrera, G. A.; Dumm, D. Gomez; Scoccola, Norberto N.

    2010-03-01

    We study the finite temperature behavior of light scalar and pseudoscalar meson properties in the context of a three-flavor nonlocal chiral quark model. The model includes mixing with active strangeness degrees of freedom, and takes care of the effect of gauge interactions by coupling the quarks with the Polyakov loop. We analyze the chiral restoration and deconfinement transitions, as well as the temperature dependence of meson masses, mixing angles and decay constants. The critical temperature is found to be T_c ≈ 202 MeV, in better agreement with lattice results than the value recently obtained in the local SU(3) PNJL model. It is seen that above T_c pseudoscalar meson masses get increased, becoming degenerate with the masses of their chiral partners. The temperatures at which this matching occurs depend on the strange quark composition of the corresponding mesons. The topological susceptibility shows a sharp decrease after the chiral transition, signalling the vanishing of the U(1)_A anomaly for large temperatures.

  9. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung Fung

    1988-01-01

    Problem partitioning of regular computation over two-dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of the computation-to-communication ratio.
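
    The trade-off studied above can be quantified with a back-of-envelope computation-to-communication ratio; the sketch below assumes a 5-point (nearest-neighbor) pattern and unit costs, which is an assumption, not the report's cost model.

      # Back-of-envelope computation-to-communication ratio for an n x n mesh
      # partitioned over a p x q processor grid, assuming a 5-point
      # nearest-neighbor pattern and unit compute/communication costs.
      def comp_comm_ratio(n, p, q):
          comp = (n / p) * (n / q)              # points updated per processor per step
          comm = 2 * (n / p) + 2 * (n / q)      # boundary points exchanged per step
          return comp / comm

      n = 1024
      print(comp_comm_ratio(n, 16, 1))   # strips:  ratio ~30
      print(comp_comm_ratio(n, 4, 4))    # squares: ratio 64, less communication per point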

  10. Continuum regularization of gauge theory with fermions

    SciTech Connect

    Chan, H.S.

    1987-03-01

    The continuum regularization program is discussed in the case of d-dimensional gauge theory coupled to fermions in an arbitrary representation. Two physically equivalent formulations are given. First, a Grassmann formulation is presented, which is based on the two-noise Langevin equations of Sakita, Ishikawa and Alfaro and Gavela. Second, a non-Grassmann formulation is obtained by regularized integration of the matter fields within the regularized Grassmann system. Explicit perturbation expansions are studied in both formulations, and considerable simplification is found in the integrated non-Grassmann formalism.

  11. Gauge equivalence in two-dimensional gravity

    SciTech Connect

    Fujiwara, T.; Igarashi, Y.; Kubo, J.; Tabei, T.

    1993-08-15

    Two-dimensional quantum gravity is identified as a second-class system which we convert into a first-class system via the Batalin-Fradkin (BF) procedure. Using the extended phase space method, we then formulate the theory in the most general class of gauges. The conformal gauge action suggested by David, Distler, and Kawai is derived from first principles. We find a local, light-cone gauge action whose Becchi-Rouet-Stora-Tyutin invariance implies Polyakov's curvature equation.

  12. Nonlocal Polyakov-Nambu-Jona-Lasinio model with wave function renormalization at finite temperature and chemical potential

    SciTech Connect

    Contrera, G. A.; Orsaria, M.; Scoccola, N. N.

    2010-09-01

    We study the phase diagram of strongly interacting matter in the framework of a nonlocal SU(2) chiral quark model which includes wave function renormalization and coupling to the Polyakov loop. Both nonlocal interactions based on the frequently used exponential form factor, and on fits to the quark mass and renormalization functions obtained in lattice calculations are considered. Special attention is paid to the determination of the critical points, both in the chiral limit and at finite quark mass. In particular, we study the position of the critical end point as well as the value of the associated critical exponents for different model parametrizations.

  13. Regular gravitational Lagrangians

    NASA Astrophysics Data System (ADS)

    Dragon, Norbert

    1992-02-01

    The Einstein action with vanishing cosmological constant is, for appropriate field content, the unique local action which is regular at the fixed point of affine coordinate transformations. Imposing this regularity requirement, one also excludes Wess-Zumino counterterms which trade gravitational anomalies for Lorentz anomalies. One has to expect dilatational and SL(D) anomalies. If these anomalies are absent, and if the regularity of the quantum vertex functional can be controlled, then Einstein gravity is renormalizable.

  14. 2+1 flavor Polyakov Nambu Jona-Lasinio model at finite temperature and nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Fu, Wei-Jie; Zhang, Zhao; Liu, Yu-Xin

    2008-01-01

    We extend the Polyakov-loop improved Nambu Jona-Lasinio model to the 2+1 flavor case to study the chiral and deconfinement transitions of strongly interacting matter at finite temperature and nonzero chemical potential. The Polyakov loop, the chiral susceptibility of light quarks (u and d), and the strange quark number susceptibility as functions of temperature at zero chemical potential are determined and compared with the recent results of lattice QCD simulations. We find that there is always an inflection point in the curve of strange quark number susceptibility accompanying the appearance of the deconfinement phase, which is consistent with the result of lattice QCD simulations. Predictions for the case at nonzero chemical potential and finite temperature are made as well. We give the phase diagram in terms of the chemical potential and temperature, and find that the critical end point moves down to lower temperature and finally disappears as the strength of the ’t Hooft flavor-mixing interaction decreases.

  15. Coupling regularizes individual units in noisy populations.

    PubMed

    Ly, Cheng; Ermentrout, G Bard

    2010-01-01

    The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators. PMID:20365403
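
    A minimal Euler-Maruyama sketch of two diffusively coupled O-U processes with unequal noise lets one reproduce the comparison described above, each unit's variability with and without coupling; all parameters are illustrative, and the direction of the effect depends on the coupling strength and noise ratio, as the paper discusses.

      # Euler-Maruyama simulation of two diffusively coupled O-U processes
      # with unequal noise amplitudes; compare each unit's stationary variance
      # with (d > 0) and without (d = 0) coupling.  Parameters illustrative.
      import numpy as np

      def variances(d, sigma1=0.5, sigma2=1.5, dt=1e-3, n=500_000, seed=1):
          rng = np.random.default_rng(seed)
          x = np.zeros(2)
          out = np.empty((n, 2))
          sig = np.array([sigma1, sigma2])
          for i in range(n):
              drift = -x + d * (x[::-1] - x)        # leak plus diffusive coupling
              x = x + drift * dt + sig * np.sqrt(dt) * rng.standard_normal(2)
              out[i] = x
          return out.var(axis=0)

      print(variances(d=0.0), variances(d=2.0))     # variances of units 1 and 2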

  16. Coupling regularizes individual units in noisy populations

    NASA Astrophysics Data System (ADS)

    Ly, Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.

  17. Coupling regularizes individual units in noisy populations

    SciTech Connect

    Ly Cheng; Ermentrout, G. Bard

    2010-01-15

    The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.

  18. Regular phantom black holes.

    PubMed

    Bronnikov, K A; Fabris, J C

    2006-06-30

    We study self-gravitating, static, spherically symmetric phantom scalar fields with arbitrary potentials (favored by cosmological observations) and single out 16 classes of possible regular configurations with flat, de Sitter, and anti-de Sitter asymptotics. Among them are traversable wormholes, bouncing Kantowski-Sachs (KS) cosmologies, and asymptotically flat black holes (BHs). A regular BH has a Schwarzschild-like causal structure, but the singularity is replaced by a de Sitter infinity, giving a hypothetic BH explorer a chance to survive. It also looks possible that our Universe has originated in a phantom-dominated collapse in another universe, with KS expansion and isotropization after crossing the horizon. Explicit examples of regular solutions are built and discussed. Possible generalizations include k-essence type scalar fields (with a potential) and scalar-tensor gravity.

  19. Regular transport dynamics produce chaotic travel times

    NASA Astrophysics Data System (ADS)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F.; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  20. Regular transport dynamics produce chaotic travel times.

    PubMed

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  1. Seeking a Regularity.

    ERIC Educational Resources Information Center

    Sokol, William

    This autoinstructional unit deals with the phenomenon of regularity in chemical behavior. The prerequisites suggested are two other autoinstructional lessons (Experiments 1 and 2) identified in the Del Mod System as SE 018 020 and SE 018 023. The equipment needed is listed, and 45 minutes is the suggested time allotment. The Student Guide includes…

  2. Mesonic correlation functions at finite temperature and density in the Nambu-Jona-Lasinio model with a Polyakov loop

    SciTech Connect

    Hansen, H.; Alberico, W. M.; Molinari, A.; Nardi, M.; Beraudo, A.

    2007-03-15

    We investigate the properties of scalar and pseudoscalar mesons at finite temperature and quark chemical potential in the framework of the Nambu-Jona-Lasinio (NJL) model coupled to the Polyakov loop (PNJL model), with the aim of taking into account features of both chiral symmetry breaking and deconfinement. The mesonic correlators are obtained by solving the Schwinger-Dyson equation in the RPA with the Hartree (mean field) quark propagator at finite temperature and density. In the phase of broken chiral symmetry, a narrower width for the σ meson is obtained with respect to the NJL case; on the other hand, the pion still behaves as a Goldstone boson. When chiral symmetry is restored, the pion and σ spectral functions tend to merge. The Mott temperature for the pion is also computed.

  3. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
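
    The role of the L2 regularizer as a diversification pressure is easiest to see in a plain minimum-variance version of the problem; the paper's actual risk measure is the expected shortfall, so the closed-form sketch below is only an analogy under that substitution, with illustrative data.

      # Minimum-variance portfolio with an L2 penalty and budget constraint
      # sum(w) = 1:  minimize  w' Sigma w + lam * ||w||^2.  The paper uses
      # expected shortfall as the risk measure; variance is substituted here
      # only to expose the diversification effect of the L2 term.
      import numpy as np

      def regularized_min_variance(Sigma, lam):
          n = Sigma.shape[0]
          w = np.linalg.solve(Sigma + lam * np.eye(n), np.ones(n))
          return w / w.sum()                    # enforce the budget constraint

      rng = np.random.default_rng(0)
      Sigma = np.cov(rng.standard_normal((60, 10)), rowvar=False)
      print(regularized_min_variance(Sigma, 0.0))    # concentrated weights
      print(regularized_min_variance(Sigma, 10.0))   # near-uniform weights

    As lam grows, the solution is pulled toward the equal-weight portfolio, which is exactly the diversification pressure described above.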

  4. Perturbations in a regular bouncing universe

    SciTech Connect

    Battefeld, T.J.; Geshnizjani, G.

    2006-03-15

    We consider a simple toy model of a regular bouncing universe. The bounce is caused by an extra timelike dimension, which leads to a sign flip of the ρ² term in the effective four-dimensional Randall-Sundrum-like description. We find a wide class of possible bounces: big bang avoiding ones for regular matter content, and big rip avoiding ones for phantom matter. Focusing on radiation as the matter content, we discuss the evolution of scalar, vector and tensor perturbations. We compute a spectral index of n_s = -1 for scalar perturbations and a deep blue index for tensor perturbations after invoking vacuum initial conditions, ruling out such a model as a realistic one. We also find that the spectrum (evaluated at Hubble crossing) is sensitive to the bounce. We conclude that it is challenging, but not impossible, for cyclic/ekpyrotic models to succeed, if one can find a regularized version.

  5. Numerical Comparison of Two-Body Regularizations

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2007-06-01

    We numerically compare four schemes to regularize a three-dimensional two-body problem under perturbations: the Sperling-Bürdet (S-B), Kustaanheimo-Stiefel (K-S), and Bürdet-Ferrandiz (B-F) regularizations, and a three-dimensional extension of the Levi-Civita (L-C) regularization we developed recently. As for the integration time of the equation of motion, the least time is needed for the unregularized treatment, followed by the K-S, the extended L-C, the B-F, and the S-B regularizations. However, these differences become significantly smaller when the time to evaluate perturbations becomes dominant. As for the integration error after one close encounter, the K-S and the extended L-C regularizations are tied for the least error, followed by the S-B, the B-F, and finally the unregularized scheme for unperturbed orbits with eccentricity less than 2. This order is not changed significantly by various kinds of perturbations. As for the integration error of elliptical orbits after multiple orbital periods, the situation remains the same except for the rank of the S-B scheme, which varies from the best to the second worst depending on the length of integration and/or on the nature of perturbations. Also, we confirm that Kepler energy scaling enhances the performance of the unregularized, K-S, and extended L-C schemes. As a result, the K-S and the extended L-C regularizations with Kepler energy scaling provide the best cost performance in integrating almost all the perturbed two-body problems.

  6. A novel combined regularization algorithm of total variation and Tikhonov regularization for open electrical impedance tomography.

    PubMed

    Liu, Jinzhen; Ling, Lin; Li, Gang

    2013-07-01

    A Tikhonov regularization method in the inverse problem of electrical impedance tomography (EIT) often results in a smooth distribution reconstruction, with which we can barely make a clear separation between the inclusions and the background. The recently popular total variation (TV) regularization method, including the lagged diffusivity (LD) method, can sharpen the edges, and is robust to noise within a small convergence region. Therefore, in this paper, we propose a novel regularization method combining the Tikhonov and LD regularization methods. Firstly, we clarify the implementation details of the Tikhonov, LD and combined methods in two-dimensional open EIT by performing the current injection and voltage measurement on one boundary of the imaging object. Next, we introduce a weighted parameter to the Tikhonov regularization method, aiming to explore the effect of the weighted parameter on the resolution and quality of reconstructed images with the inclusion at different depths. Then, we analyze the performance of these algorithms with noisy data. Finally, we evaluate the effect of the current injection pattern on reconstruction quality and propose a modified current injection pattern. The results indicate that the combined regularization algorithm, with stable convergence, is able to improve the reconstruction quality with sharp contrast, and is more robust to noise than the Tikhonov and LD regularization methods alone. In addition, the results show that a current injection pattern with a bigger driver angle leads to better reconstruction quality.
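
    In one dimension, the combined objective and its lagged-diffusivity treatment fit in a few lines; the sketch below stands a generic ill-conditioned matrix in for the EIT sensitivity matrix, and the parameter names and values are illustrative.

      # 1D sketch of the combined objective
      #     min  ||A x - b||^2 + alpha*||x||^2 + beta*TV(x),
      # solved by lagged-diffusivity fixed-point iteration.  A generic matrix
      # A stands in for the EIT sensitivity matrix; parameters illustrative.
      import numpy as np

      def combined_reconstruction(A, b, alpha, beta, iters=30, eps=1e-6):
          n = A.shape[1]
          D = np.diff(np.eye(n), axis=0)               # forward differences
          x = np.zeros(n)
          for _ in range(iters):
              w = 1.0 / np.sqrt((D @ x) ** 2 + eps)    # lagged-diffusivity weights
              H = A.T @ A + alpha * np.eye(n) + beta * D.T @ (w[:, None] * D)
              x = np.linalg.solve(H, A.T @ b)
          return x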

  7. Regularized versus non-regularized statistical reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Denisova, N. V.

    2011-08-01

    An important feature of positron emission tomography (PET) and single photon emission computed tomography (SPECT) is the stochastic property of real clinical data. Statistical algorithms such as ordered subset-expectation maximization (OSEM) and maximum a posteriori (MAP) are a direct consequence of the stochastic nature of the data. The principal difference between these two algorithms is that OSEM is a non-regularized approach, while MAP is a regularized algorithm. From the theoretical point of view, reconstruction problems belong to the class of ill-posed problems and should be treated using regularization. Regularization introduces an additional unknown regularization parameter into the reconstruction procedure as compared with non-regularized algorithms. However, a comparison of the non-regularized OSEM and regularized MAP algorithms with fixed regularization parameters has shown only a very minor difference between the reconstructions. This problem is analyzed in the present paper. To improve the reconstruction quality, a method of local regularization is proposed based on a spatially adaptive regularization parameter. The MAP algorithm with local regularization was tested in reconstruction of the Hoffman brain phantom.
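
    The non-regularized baseline in this comparison, ML-EM (of which OSEM is the subset-accelerated variant), has a one-line multiplicative update; the sketch below shows it without subsets or a MAP penalty, with a generic system matrix A standing in for the scanner model.

      # ML-EM update for Poisson data (OSEM applies it over ordered subsets):
      #     x  <-  x / (A^T 1) * A^T( b / (A x) ).
      # A MAP algorithm would modify this update with a penalty (prior) term.
      import numpy as np

      def mlem(A, b, n_iter=50):
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
          for _ in range(n_iter):
              x *= (A.T @ (b / (A @ x))) / sens
          return x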

  8. Mainstreaming the Regular Classroom Student.

    ERIC Educational Resources Information Center

    Kahn, Michael

    The paper presents activities, suggested by regular classroom teachers, to help prepare the regular classroom student for mainstreaming. The author points out that regular classroom children need a vehicle in which curiosity, concern, interest, fear, attitudes and feelings can be fully explored, where prejudices can be dispelled, and where the…

  9. Learning regularized LDA by clustering.

    PubMed

    Pang, Yanwei; Wang, Shuang; Yuan, Yuan

    2014-12-01

    As a supervised dimensionality reduction technique, linear discriminant analysis has a serious overfitting problem when the number of training samples per class is small. The main reason is that the between- and within-class scatter matrices computed from the limited number of training samples deviate greatly from the underlying ones. To overcome the problem without increasing the number of training samples, we propose making use of the structure of the given training data to regularize the between- and within-class scatter matrices by between- and within-cluster scatter matrices, respectively, and simultaneously. The within- and between-cluster matrices are computed from unsupervised clustered data. The within-cluster scatter matrix contributes to encoding the possible variations in intraclasses and the between-cluster scatter matrix is useful for separating extra classes. The contributions are inversely proportional to the number of training samples per class. The advantages of the proposed method become more remarkable as the number of training samples per class decreases. Experimental results on the AR and Feret face databases demonstrate the effectiveness of the proposed method.
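
    The key construction, blending supervised scatter matrices with their unsupervised cluster-based counterparts, can be sketched directly; the mixing weight gamma below is illustrative and not the paper's sample-size-dependent schedule.

      # Blend the supervised within-class scatter with a within-cluster
      # scatter computed from unsupervised k-means labels; gamma is an
      # illustrative mixing weight, not the paper's exact schedule.
      import numpy as np
      from sklearn.cluster import KMeans

      def scatter(X, labels):
          S = np.zeros((X.shape[1], X.shape[1]))
          for c in np.unique(labels):
              Xc = X[labels == c] - X[labels == c].mean(axis=0)
              S += Xc.T @ Xc
          return S

      def regularized_within_scatter(X, y, n_clusters=10, gamma=0.5):
          Sw = scatter(X, y)                                       # supervised
          clusters = KMeans(n_clusters, n_init=10).fit_predict(X)  # unsupervised
          return (1 - gamma) * Sw + gamma * scatter(X, clusters)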

  10. Dimensional Reduction and Hadronic Processes

    SciTech Connect

    Signer, Adrian; Stoeckinger, Dominik

    2008-11-23

    We consider the application of regularization by dimensional reduction to NLO corrections of hadronic processes. The general collinear singularity structure is discussed, the origin of the regularization-scheme dependence is identified and transition rules to other regularization schemes are derived.

  11. A dynamic phase-field model for structural transformations and twinning: Regularized interfaces with transparent prescription of complex kinetics and nucleation. Part I: Formulation and one-dimensional characterization

    NASA Astrophysics Data System (ADS)

    Agrawal, Vaibhav; Dayal, Kaushik

    2015-12-01

    The motion of microstructural interfaces is important in modeling twinning and structural phase transformations. Continuum models fall into two classes: sharp-interface models, where interfaces are singular surfaces; and regularized-interface models, such as phase-field models, where interfaces are smeared out. The former are challenging for numerical solutions because the interfaces need to be explicitly tracked, but have the advantage that the kinetics of existing interfaces and the nucleation of new interfaces can be transparently and precisely prescribed. In contrast, phase-field models do not require explicit tracking of interfaces, thereby enabling relatively simple numerical calculations, but the specification of kinetics and nucleation is both restrictive and extremely opaque. This prevents straightforward calibration of phase-field models to experiment and/or molecular simulations, and breaks the multiscale hierarchy of passing information from atomic to continuum. Consequently, phase-field models cannot be confidently used in dynamic settings. This shortcoming of existing phase-field models motivates our work. We present the formulation of a phase-field model - i.e., a model with regularized interfaces that do not require explicit numerical tracking - that allows for easy and transparent prescription of complex interface kinetics and nucleation. The key ingredients are a re-parametrization of the energy density to clearly separate nucleation from kinetics; and an evolution law that comes from a conservation statement for interfaces. This enables clear prescription of nucleation - through the source term of the conservation law - and kinetics - through a distinct interfacial velocity field. A formal limit of the kinetic driving force recovers the classical continuum sharp-interface driving force, providing confidence in both the re-parametrized energy and the evolution statement. We present some 1D calculations characterizing the formulation.

  12. Some results on the spectra of strongly regular graphs

    NASA Astrophysics Data System (ADS)

    Vieira, Luís António de Almeida; Mano, Vasco Moço

    2016-06-01

    Let G be a strongly regular graph whose adjacency matrix is A. We associate with the strongly regular graph G a real finite-dimensional Euclidean Jordan algebra 𝒱 of rank three, spanned by I and the natural powers of A, endowed with the Jordan product of matrices and with the inner product given by the usual trace of matrices. Finally, by the analysis of the binomial Hadamard series of an element of 𝒱, we establish some inequalities on the parameters and on the spectrum of a strongly regular graph, like those established in Theorems 3 and 4.
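
    The rank-three structure reflects the fact that a strongly regular graph has exactly three distinct adjacency eigenvalues; the sketch below checks this numerically for the Petersen graph, srg(10, 3, 0, 1).

      # A strongly regular graph has exactly three distinct adjacency
      # eigenvalues, so I, A, A^2 span a rank-three algebra; checked here
      # numerically for the Petersen graph srg(10, 3, 0, 1).
      import numpy as np
      import networkx as nx

      A = nx.to_numpy_array(nx.petersen_graph())
      vals, mult = np.unique(np.round(np.linalg.eigvalsh(A), 8), return_counts=True)
      print(vals, mult)   # eigenvalues -2, 1, 3 with multiplicities 4, 5, 1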

  13. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    PubMed

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek the graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms state-of-the-art low-rank matrix factorization methods.

  14. Quaternion regularization and stabilization of perturbed central motion. II

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    1993-04-01

    Generalized regular quaternion equations for the three-dimensional two-body problem in terms of Kustaanheimo-Stiefel variables are obtained within the framework of the quaternion theory of regularizing and stabilizing transformations of the Newtonian equations for perturbed central motion. Regular quaternion equations for perturbed central motion of a material point in a central field with a certain potential Pi are also derived in oscillatory and normal forms. In addition, systems of perturbed central motion equations are obtained which include quaternion equations of perturbed orbit orientations in oscillatory or normal form, and a generalized Binet equation is derived. A comparative analysis of the equations is carried out.

  15. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  16. Regularization in radio tomographic imaging

    NASA Astrophysics Data System (ADS)

    Sundaram, Ramakrishnan; Martin, Richard; Anderson, Christopher

    2013-05-01

    This paper demonstrates methods to select and apply regularization to the linear least-squares model formulation of the radio tomographic imaging (RTI) problem. Typically, the RTI inverse problem of image reconstruction is ill-conditioned due to the extremely small singular values of the weight matrix which relates the link signal strengths to the voxel locations of the obstruction. Regularization is included to offset the non-invertible nature of the weight matrix by adding a regularization term such as the matrix approximation of derivatives in each dimension based on the difference operator. This operation yields a smooth least-squares solution for the measured data by suppressing the high energy or noise terms in the derivative of the image. Traditionally, a scalar weighting factor of the regularization matrix is identified by trial and error (ad hoc) to yield the best fit of the solution to the data without either excessive smoothing or ringing oscillations at the boundaries of the obstruction. This paper proposes new scalar and vector regularization methods that are automatically computed based on the weight matrix. Evidence of the effectiveness of these methods compared to the preset scalar regularization method is presented for stationary and moving obstructions in an RTI wireless sensor network. The variation of the mean square reconstruction error as a function of the scalar regularization is calculated for known obstructions in the network. The vector regularization procedure based on selective updates to the singular values of the weight matrix attains the lowest mean square error.
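
    The preset-scalar baseline that the paper improves upon is ordinary Tikhonov-regularized least squares with a difference-operator penalty; below is a minimal sketch with an illustrative weight alpha and 2D first differences built by Kronecker products (the names and setup are assumptions, not the paper's code).

      # Baseline RTI reconstruction: Tikhonov-regularized least squares with
      # a first-difference penalty and a preset scalar weight alpha (the
      # scheme the paper's automatic scalar/vector selections improve upon).
      import numpy as np

      def rti_reconstruct(W, y, alpha, nx, ny):
          dx = np.diff(np.eye(nx), axis=0)
          dy = np.diff(np.eye(ny), axis=0)
          D = np.vstack([np.kron(np.eye(ny), dx),    # differences along x
                         np.kron(dy, np.eye(nx))])   # differences along y
          H = W.T @ W + alpha * (D.T @ D)            # regularized normal equations
          return np.linalg.solve(H, W.T @ y).reshape(ny, nx)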

  17. Regularly timed events amid chaos

    NASA Astrophysics Data System (ADS)

    Blakely, Jonathan N.; Cooper, Roy M.; Corron, Ned J.

    2015-11-01

    We show rigorously that the solutions of a class of chaotic oscillators are characterized by regularly timed events in which the derivative of the solution is instantaneously zero. The perfect regularity of these events is in stark contrast with the well-known unpredictability of chaos. We explore some consequences of these regularly timed events through experiments using chaotic electronic circuits. First, we show that a feedback loop can be implemented to phase lock the regularly timed events to a periodic external signal. In this arrangement the external signal regulates the timing of the chaotic signal but does not strictly lock its phase. That is, phase slips of the chaotic oscillation persist without disturbing timing of the regular events. Second, we couple the regularly timed events of one chaotic oscillator to those of another. A state of synchronization is observed where the oscillators exhibit synchronized regular events while their chaotic amplitudes and phases evolve independently. Finally, we add additional coupling to synchronize the amplitudes, as well, however in the opposite direction illustrating the independence of the amplitudes from the regularly timed events.

  18. Fractional norm regularization: learning with very few relevant features.

    PubMed

    Kaban, Ata

    2013-06-01

    Learning in the presence of a large number of irrelevant features is an important problem in high-dimensional tasks. Previous studies have shown that L1-norm regularization can be effective in such cases while L2-norm regularization is not. Furthermore, work in compressed sensing suggests that regularization by nonconvex (e.g., fractional) semi-norms may outperform L1-regularization. However, for classification it is largely unclear when this may or may not be the case. In addition, the nonconvex problem is harder to solve than the convex L1 problem. In this paper, we provide a more in-depth analysis to elucidate the potential advantages and pitfalls of nonconvex regularization in the context of logistic regression where the regularization term employs the family of Lq semi-norms. First, using results from the phenomenon of concentration of norms and distances in high dimensions, we gain intuition about the working of sparse estimation when the dimensionality is very high. Second, using the probably approximately correct (PAC)-Bayes methodology, we give a data-dependent bound on the generalization error of Lq-regularized logistic regression, which is applicable to any algorithm that implements this model, and may be used to predict its generalization behavior from the training set alone. Third, we demonstrate the usefulness of our approach by experiments and applications, where the PAC-Bayes bound is used to guide the choice of semi-norm in the regularization term. The results support the conclusion that the optimal choice of regularization depends on the relative fraction of relevant versus irrelevant features, and a fractional norm with a small exponent is most suitable when the fraction of relevant features is very small.

  19. Rotating regular black hole solution

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

    Based on the Newman-Janis algorithm, the Ayón-Beato-García spacetime metric [Phys. Rev. Lett. 80, 5056 (1998)] of the regular spherically symmetric, static, and charged black hole has been converted into rotational form. It is shown that the derived rotating black hole solution is regular, and that the critical value of the electric charge for which the two horizons merge into one decreases appreciably in the presence of a nonvanishing rotation parameter a of the black hole.

  20. NONCONVEX REGULARIZATION FOR SHAPE PRESERVATION

    SciTech Connect

    CHARTRAND, RICK

    2007-01-16

    The authors show that using a nonconvex penalty term to regularize image reconstruction can substantially improve the preservation of object shapes. The commonly used total-variation regularization, ∫|∇u|, penalizes the length of the object edges. They show that ∫|∇u|^p, 0 < p < 1, only penalizes edges of dimension at least 2−p, and thus finite-length edges not at all. They give numerical examples showing the resulting improvement in shape preservation.
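
    The claim about finite-length edges can be checked numerically: for a one-dimensional edge of unit height smoothed over a width h, |∇u| ≈ 1/h on a band of width h, so ∫|∇u|^p ≈ h^(1-p), which stays near 1 for p = 1 but vanishes as h → 0 for p < 1. A minimal sketch (grid resolution and ramp profile are arbitrary choices):

        import numpy as np

        def tvp_edge(h, p, n=4001):
            """Approximate the integral of |u'|^p for a unit edge smoothed over width h."""
            x = np.linspace(0.0, 1.0, n)
            u = np.clip((x - 0.5) / h + 0.5, 0.0, 1.0)   # linear ramp of width h
            du = np.gradient(u, x)
            return np.sum(np.abs(du) ** p) * (x[1] - x[0])

        for p in (1.0, 0.5):
            # p = 1 stays near 1; p = 0.5 decays like sqrt(h) as the edge sharpens
            print(p, [round(tvp_edge(h, p), 3) for h in (0.1, 0.01, 0.001)])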

  1. On a regularity criterion for the Navier-Stokes equations involving gradient of one velocity component

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Pokorný, Milan

    2009-12-01

    We improve the regularity criterion for the incompressible Navier-Stokes equations in the full three-dimensional space involving the gradient of one velocity component. The method is based on recent results of Cao and Titi [see "Regularity criteria for the three dimensional Navier-Stokes equations," Indiana Univ. Math. J. 57, 2643 (2008)] and Kukavica and Ziane [see "Navier-Stokes equations with regularity in one direction," J. Math. Phys. 48, 065203 (2007)]. In particular, for s ∈ [2,3], we get that the solution is regular if ∇u_3 ∈ L^t(0,T; L^s(R^3)) with 2/t + 3/s ≤ 23/12.

  2. Geometric continuum regularization of quantum field theory

    SciTech Connect

    Halpern, M.B. . Dept. of Physics)

    1989-11-08

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs.

  3. Boundary Regularity in Variational Problems

    NASA Astrophysics Data System (ADS)

    Kristensen, Jan; Mingione, Giuseppe

    2010-11-01

    We prove that, if u : Ω ⊂ R^n → R^N is a solution to the Dirichlet variational problem min_w ∫_Ω F(x, w, Dw) dx subject to w ≡ u_0 on ∂Ω, involving a regular boundary datum (u_0, ∂Ω) and a regular integrand F(x, w, Dw), strongly convex in Dw and satisfying suitable growth conditions, then H^(n-1)-almost every boundary point is regular for u in the sense that Du is Hölder continuous in a relative neighborhood of the point. The existence of even one such regular boundary point was previously not known except for some very special cases treated by Jost & Meier (Math Ann 262:549-561, 1983). Our results are consequences of new up-to-the-boundary higher differentiability results that we establish for minima of the functionals in question. The methods also allow us to improve the known boundary regularity results for solutions to non-linear elliptic systems, and, in some cases, to improve the known interior singular set estimates for minimizers. Moreover, our approach allows for a treatment of systems and functionals with "rough" coefficients belonging to suitable Sobolev spaces of fractional order.

  4. Regularization Analysis of SAR Superresolution

    SciTech Connect

    DELAURENTIS,JOHN M.; DICKEY,FRED M.

    2002-04-01

    Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. In a previous report the application of the concept to synthetic aperture radar was investigated as an operator inversion problem. Generally, the operator inversion problem is ill posed. This work treats the problem from the standpoint of regularization. Both the operator inversion approach and the regularization approach show that the ability to superresolve SAR imagery is severely limited by system noise.

  5. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multicollinearity, i.e., high correlations among exogenous variables. GSCA as yet has no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  6. Regular languages, regular grammars and automata in splicing systems

    NASA Astrophysics Data System (ADS)

    Mohamad Jan, Nurhidaya; Fong, Wan Heng; Sarmin, Nor Haniza

    2013-04-01

    Splicing system is known as a mathematical model that initiates the connection between the study of DNA molecules and formal language theory. In splicing systems, languages called splicing languages refer to the set of double-stranded DNA molecules that may arise from an initial set of DNA molecules in the presence of restriction enzymes and ligase. In this paper, some splicing languages resulting from their respective splicing systems are shown. Since all splicing languages are regular, languages which result from the splicing systems can be further investigated using grammars and automata in the field of formal language theory. The splicing language can be written in the form of regular languages generated by grammar. Besides that, splicing systems can be accepted by automata. In this research, two restriction enzymes are used in the splicing systems, namely BfuCI and NcoI.

  7. Grouping pursuit through a regularization solution surface *

    PubMed Central

    Shen, Xiaotong; Huang, Hsin-Cheng

    2010-01-01

    Summary Extracting grouping structure or identifying homogenous subgroups of predictors in regression is crucial for high-dimensional data analysis. A low-dimensional structure, grouping in particular, when captured in a regression model, enhances predictive performance and facilitates a model's interpretability. Grouping pursuit extracts homogenous subgroups of predictors most responsible for outcomes of a response. This is the case in gene network analysis, where grouping reveals gene functionalities with regard to progression of a disease. To address challenges in grouping pursuit, we introduce a novel homotopy method for computing an entire solution surface through regularization involving a piecewise linear penalty. This nonconvex and overcomplete penalty permits adaptive grouping and nearly unbiased estimation, which is treated with a novel concept of grouped subdifferentials and difference convex programming for efficient computation. Finally, the proposed method not only achieves high performance as suggested by numerical analysis, but also has the desired optimality with regard to grouping pursuit and prediction as shown by our theoretical results. PMID:20689721

  8. Distributional Stress Regularity: A Corpus Study

    ERIC Educational Resources Information Center

    Temperley, David

    2009-01-01

    The regularity of stress patterns in a language depends on "distributional stress regularity", which arises from the pattern of stressed and unstressed syllables, and "durational stress regularity", which arises from the timing of syllables. Here we focus on distributional regularity, which depends on three factors. "Lexical stress patterning"…

  9. Regular Motions of Resonant Asteroids

    NASA Astrophysics Data System (ADS)

    Ferraz-Mello, S.

    1990-11-01

    This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS

  10. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that, to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  11. Statistical regularities reduce perceived numerosity.

    PubMed

    Zhao, Jiaying; Yu, Ru Qi

    2016-01-01

    Numerical information can be perceived at multiple levels (e.g., one bird, or a flock of birds). The level of input has typically been defined by explicit grouping cues, such as contours or connecting lines. Here we examine how regularities of object co-occurrences shape numerosity perception in the absence of explicit grouping cues. Participants estimated the number of colored circles in an array. We found that estimates were lower in arrays containing colors that consistently appeared next to each other across the experiment, even though participants were not explicitly aware of the color pairs (Experiments 1a and 1b). To provide support for grouping, we introduced color duplicates and found that estimates were lower in arrays with two identical colors (Experiment 2). The underestimation could not be explained by increased attention to individual objects (Experiment 3). These results suggest that statistical regularities reduce perceived numerosity consistent with a grouping mechanism. PMID:26451701

  12. Deep learning regularized Fisher mappings.

    PubMed

    Wong, W K; Sun, Mingming

    2011-10-01

    For classification tasks, it is always desirable to extract features that are most effective for preserving class separability. In this brief, we propose a new feature extraction method called regularized deep Fisher mapping (RDFM), which learns an explicit mapping from the sample space to the feature space using a deep neural network to enhance the separability of features according to the Fisher criterion. Compared to kernel methods, the deep neural network is a deep and nonlocal learning architecture, and therefore exhibits more powerful ability to learn the nature of highly variable datasets from fewer samples. To eliminate the side effects of overfitting brought about by the large capacity of powerful learners, regularizers are applied in the learning procedure of RDFM. RDFM is evaluated in various types of datasets, and the results reveal that it is necessary to apply unsupervised regularization in the fine-tuning phase of deep learning. Thus, for very flexible models, the optimal Fisher feature extractor may be a balance between discriminative ability and descriptive ability.

  13. [Iterated Tikhonov Regularization for Spectral Recovery from Tristimulus].

    PubMed

    Xie, De-hong; Li, Rui; Wan, Xiao-xia; Liu, Qiang; Zhu, Wen-feng

    2016-01-01

    Reflectance spectra in a multispectral image objectively and faithfully represent color information owing to their high dimensionality and their independence of illuminant and device. To address the loss of spectral information that occurs when spectral data are reconstructed from three-dimensional colorimetric data in a trichromatic camera-based spectral image acquisition system, and the consequent loss of color information, this work proposes an iterated Tikhonov regularization for reconstructing the reflectance spectra. First, according to the relationship between colorimetric values and reflectance spectra in colorimetric theory, a spectral reconstruction equation is constructed that recovers high-dimensional spectral data from the three-dimensional colorimetric data acquired by the trichromatic camera. Then, iterated Tikhonov regularization, inspired by the idea of the Moore-Penrose pseudoinverse, is used to cope with the linear ill-posed inverse problem arising when solving the spectral reconstruction equation. The work also uses the L-curve method to obtain an optimal regularization parameter for the iterated Tikhonov regularization by training on a set of samples. Through these methods, the ill-conditioning of the spectral reconstruction equation can be effectively controlled and improved, and the loss of spectral information in the reconstructed spectral data is reduced. A verification experiment is performed on another set of samples. The experimental results show that the proposed method reconstructs the reflectance spectra with less loss of spectral information in the trichromatic camera-based spectral image acquisition system, reflected in clear decreases of spectral errors and colorimetric errors compared with the previous method.
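
    The iterated Tikhonov step itself is short: x_{k+1} = x_k + (A^T A + λI)^{-1} A^T (b − A x_k). The sketch below applies it to an invented 3-channel-to-31-band toy problem; the camera response matrix, λ and iteration count are placeholders, and the paper's L-curve training procedure is not reproduced:

        import numpy as np

        def iterated_tikhonov(A, b, lam, n_iter=10):
            """Iterated Tikhonov: x <- x + (A^T A + lam I)^{-1} A^T (b - A x)."""
            M = A.T @ A + lam * np.eye(A.shape[1])
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x += np.linalg.solve(M, A.T @ (b - A @ x))
            return x

        # Toy spectral recovery: 3 sensor responses, 31 spectral bands.
        rng = np.random.default_rng(2)
        S = np.abs(rng.standard_normal((3, 31)))                   # hypothetical responses
        r_true = np.exp(-0.5 * ((np.arange(31) - 15) / 5.0) ** 2)  # smooth reflectance
        t = S @ r_true                                             # tristimulus-like values
        r_hat = iterated_tikhonov(S, t, lam=1e-3)
        print(np.linalg.norm(r_hat - r_true))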

  14. Curved branes with regular support

    NASA Astrophysics Data System (ADS)

    Antoniadis, Ignatios; Cotsakis, Spiros; Klaoudatou, Ifigeneia

    2016-09-01

    We study spacetime singularities in a general five-dimensional braneworld with curved branes satisfying four-dimensional maximal symmetry. The bulk is supported by an analog of perfect fluid with the time replaced by the extra coordinate. We show that contrary to the existence of finite-distance singularities from the brane location in any solution with flat (Minkowski) branes, in the case of curved branes there are singularity-free solutions for a range of equations of state compatible with the null energy condition.

  15. Knowledge and regularity in planning

    NASA Technical Reports Server (NTRS)

    Allen, John A.; Langley, Pat; Matwin, Stan

    1992-01-01

    The field of planning has focused on several methods of using domain-specific knowledge. The three most common methods, use of search control, use of macro-operators, and analogy, are part of a continuum of techniques differing in the amount of reused plan information. This paper describes TALUS, a planner that exploits this continuum, and is used for comparing the relative utility of these methods. We present results showing how search control, macro-operators, and analogy are affected by domain regularity and the amount of stored knowledge.

  16. Tessellating the Sphere with Regular Polygons

    ERIC Educational Resources Information Center

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.

  17. Regular Pentagons and the Fibonacci Sequence.

    ERIC Educational Resources Information Center

    French, Doug

    1989-01-01

    Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)

  18. Some Cosine Relations and the Regular Heptagon

    ERIC Educational Resources Information Center

    Osler, Thomas J.; Heng, Phongthong

    2007-01-01

    The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)

  19. Natural frequency of regular basins

    NASA Astrophysics Data System (ADS)

    Tjandra, Sugih S.; Pudjaprasetya, S. R.

    2014-03-01

    Similar to the vibration of a guitar string or an elastic membrane, water waves in an enclosed basin undergo standing oscillatory waves, also known as seiches. The resonant (eigen) periods of seiches are determined by water depth and geometry of the basin. For regular basins, explicit formulas are available. Resonance occurs when the dominant frequency of the external force matches the eigen frequency of the basin. In this paper, we implement a conservative finite volume scheme for the 2D shallow water equations to simulate resonance in closed basins. Further, we would like to use this scheme, together with energy spectra of the recorded signal, to extract the resonant periods of arbitrary basins. Here we first test the procedure by extracting the resonant periods of a square closed basin. The numerical resonant periods that we obtain are comparable with those from analytical formulas.

  20. Regularized degenerate multi-solitons

    NASA Astrophysics Data System (ADS)

    Correa, Francisco; Fring, Andreas

    2016-09-01

    We report complex PT-symmetric multi-soliton solutions to the Korteweg de-Vries equation that asymptotically contain one-soliton solutions, with each of them possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate energy solutions of the Schrödinger equation. Technically this is achieved by the application of Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively they may be constructed from a limiting process within the context Hirota's direct method or on a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.

  2. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    Significantly, many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  3. Wave dynamics of regular and chaotic rays

    SciTech Connect

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space.

  4. Bayesian regularization of neural networks.

    PubMed

    Burden, Frank; Winkler, Dave

    2008-01-01

    Bayesian regularized artificial neural networks (BRANNs) are more robust than standard back-propagation nets and can reduce or eliminate the need for lengthy cross-validation. Bayesian regularization is a mathematical process that converts a nonlinear regression into a "well-posed" statistical problem in the manner of a ridge regression. The advantage of BRANNs is that the models are robust and the validation process, which scales as O(N^2) in normal regression methods, such as back propagation, is unnecessary. These networks provide solutions to a number of problems that arise in QSAR modeling, such as choice of model, robustness of model, choice of validation set, size of validation effort, and optimization of network architecture. They are difficult to overtrain, since evidence procedures provide an objective Bayesian criterion for stopping training. They are also difficult to overfit, because the BRANN calculates and trains on a number of effective network parameters or weights, effectively turning off those that are not relevant. This effective number is usually considerably smaller than the number of weights in a standard fully connected back-propagation neural net. Automatic relevance determination (ARD) of the input variables can be used with BRANNs, and this allows the network to "estimate" the importance of each input. The ARD method ensures that irrelevant or highly correlated indices used in the modeling are neglected as well as showing which are the most important variables for modeling the activity data. This chapter outlines the equations that define the BRANN method plus a flowchart for producing a BRANN-QSAR model. Some results of the use of BRANNs on a number of data sets are illustrated and compared with other linear and nonlinear models.
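
    For a linear analogue of the ARD behaviour described here, scikit-learn's ARDRegression (a Bayesian linear model, not a BRANN) prunes irrelevant inputs in the same spirit; a minimal sketch with invented data:

        import numpy as np
        from sklearn.linear_model import ARDRegression

        rng = np.random.default_rng(8)
        X = rng.standard_normal((100, 20))
        y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(100)  # 2 relevant inputs
        ard = ARDRegression().fit(X, y)
        print(np.round(ard.coef_, 2))       # irrelevant coefficients are driven toward 0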

  5. Nondissipative Velocity and Pressure Regularizations for the ICON Model

    NASA Astrophysics Data System (ADS)

    Restelli, M.; Giorgetta, M.; Hundertmark, T.; Korn, P.; Reich, S.

    2009-04-01

    formulation can be extended to the regularized systems retaining discrete conservation of mass and potential enstrophy. We also present some numerical results both in planar, doubly periodic geometry and in spherical geometry. These results show that our numerical formulation correctly approximates the behavior of the regularized models, and are a first step toward the use of the regularization idea within a complete, three-dimensional GCM. References [BR05] L. Bonaventura and T. Ringler. Analysis of discrete shallow-water models on geodesic Delaunay grids with C-type staggering. Mon. Wea. Rev., 133(8):2351-2373, August 2005. [HHPW08] M.W. Hecht, D.D. Holm, M.R. Petersen, and B.A. Wingate. Implementation of the LANS-α turbulence model in a primitive equation ocean model. J. Comp. Phys., 227(11):5691-5716, May 2008. [RWS07] S. Reich, N. Wood, and A. Staniforth. Semi-implicit methods, nonlinear balance, and regularized equations. Atmos. Sci. Lett., 8(1):1-6, 2007.

  6. Stochastic regularization operators on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan

    2016-04-01

    Most geophysical inverse problems require the solution of underdetermined systems of equations. In order to solve such inverse problems, appropriate regularization is required. Ideally, this regularization includes information on the expected model variability and spatial correlation. Based on geostatistical covariance functions, which can be adapted to the specific situation, stochastic regularization can be used to add auxiliary constraints to the given inverse problem. Stochastic regularization operators have been successfully applied to geophysical inverse problems formulated on regular grids. Here, we demonstrate the calculation of stochastic regularization operators for unstructured meshes. Unstructured meshes are advantageous with regards to incorporating arbitrary topography, undulating geological interfaces and complex acquisition geometries into the inversion. However, compared to regular grids, unstructured meshes have variable cell sizes, complicating the calculation of stochastic operators. The stochastic operators proposed here are based on a 2D exponential correlation function, allowing one to predefine spatial correlation lengths. The regularization thus acts over an imposed correlation length rather than only taking into account neighbouring cells as in regular smoothing constraints. Correlation over a spatial length partly removes the effects of variable cell sizes of unstructured meshes on the regularization. Synthetic models having large-scale interfaces as well as small-scale stochastic variations are used to analyse the performance and behaviour of the stochastic regularization operators. The resulting inverted models obtained with stochastic regularization are compared against the results of standard regularization approaches (damping and smoothing). Besides using stochastic operators for regularization, we plan to incorporate the footprint of the stochastic operator in further applications such as the calculation of the cross-gradient functions.
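
    A minimal version of such an operator can be built directly from cell-centre coordinates. The sketch below (numpy only; the function name and constants are invented) forms the exponential covariance C_ij = exp(−d_ij/ℓ) and uses C^(−1/2) as a roughness-penalty matrix, one common way to turn a geostatistical covariance into a regularization operator:

        import numpy as np

        def stochastic_regularization(centers, corr_len):
            """Return C^(-1/2) for the model covariance C_ij = exp(-d_ij / corr_len)."""
            d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
            C = np.exp(-d / corr_len)
            w, V = np.linalg.eigh(C)                 # C is symmetric positive definite
            return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T

        # Irregular 2-D cell centres standing in for an unstructured mesh.
        rng = np.random.default_rng(3)
        centers = rng.random((200, 2))
        R = stochastic_regularization(centers, corr_len=0.2)
        print(R.shape)   # use as smoothing matrix in ||A m - d||^2 + ||R m||^2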

  7. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
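
    One simple reading of a regularizer that is linear in the kernel matrix is to blend a base kernel with the ideal label kernel yy^T. The sketch below is an illustration in that spirit, not the paper's exact formulation; it shows the kernel-target alignment increasing after the adjustment:

        import numpy as np

        def ideal_regularize(K, y, lam):
            """Blend a base kernel with the ideal kernel y y^T (binary y in {-1, +1})."""
            return K + lam * np.outer(y, y)

        def alignment(K1, K2):
            return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

        rng = np.random.default_rng(9)
        X = rng.standard_normal((60, 5))
        y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(60))
        K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))   # RBF base kernel
        K_ideal = np.outer(y, y)
        print(alignment(K, K_ideal), alignment(ideal_regularize(K, y, 2.0), K_ideal))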

  8. Emergent criticality and Friedan scaling in a two-dimensional frustrated Heisenberg antiferromagnet

    NASA Astrophysics Data System (ADS)

    Orth, Peter P.; Chandra, Premala; Coleman, Piers; Schmalian, Jörg

    2014-03-01

    We study a two-dimensional frustrated Heisenberg antiferromagnet on the windmill lattice consisting of triangular and dual honeycomb lattice sites. In the classical ground state, the spins on different sublattices are decoupled, but quantum and thermal fluctuations drive the system into a coplanar state via an "order from disorder" mechanism. We obtain the finite temperature phase diagram using renormalization group approaches. In the coplanar regime, the relative U(1) phase between the spins on the two sublattices decouples from the remaining degrees of freedom, and is described by a six-state clock model with an emergent critical phase. At lower temperatures, the system enters a Z6 broken phase with long-range phase correlations. We derive these results by two distinct renormalization group approaches to two-dimensional magnetism: Wilson-Polyakov scaling and Friedan's geometric approach to nonlinear sigma models where the scaling of the spin stiffnesses is governed by the Ricci flow of a 4D metric tensor.

  9. Deconvolution of axisymmetric flame properties using Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Daun, Kyle J.; Thomson, Kevin A.; Liu, Fengshan; Smallwood, Greg J.

    2006-07-01

    We present a method based on Tikhonov regularization for solving one-dimensional inverse tomography problems that arise in combustion applications. In this technique, Tikhonov regularization transforms the ill-conditioned set of equations generated by onion-peeling deconvolution into a well-conditioned set that is less susceptible to measurement errors that arise in experimental settings. The performance of this method is compared to that of onion-peeling and Abel three-point deconvolution by solving for a known field variable distribution from projected data contaminated with an artificially generated error. The results show that Tikhonov deconvolution provides a more accurate field distribution than onion-peeling and Abel three-point deconvolution and is more stable than the other two methods as the distance between projected data points decreases.
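
    The comparison can be reproduced in miniature: build the upper-triangular onion-peeling chord matrix for an axisymmetric emission profile, add noise to the projections, and compare direct inversion against a Tikhonov solve. The second-difference smoothing matrix and the value of λ below are illustrative choices, not the paper's:

        import numpy as np

        def onion_peeling_matrix(n, dr=1.0):
            """Chord lengths through annular shells for a parallel-ray projection."""
            A = np.zeros((n, n))
            r = np.arange(n + 1) * dr                 # shell boundaries
            for i in range(n):                        # chord at height y_i = r_i
                for j in range(i, n):
                    A[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                                     - np.sqrt(max(r[j] ** 2 - r[i] ** 2, 0.0)))
            return A

        n = 50
        A = onion_peeling_matrix(n)
        em_true = np.exp(-((np.arange(n) - 20) / 8.0) ** 2)      # radial emission
        b = A @ em_true + 0.02 * np.random.default_rng(4).standard_normal(n)

        L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # 2nd-difference smoother
        em_tik = np.linalg.solve(A.T @ A + 1.0 * L.T @ L, A.T @ b)  # lambda chosen by eye
        em_peel = np.linalg.solve(A, b)                          # plain onion peeling
        print(np.linalg.norm(em_tik - em_true), np.linalg.norm(em_peel - em_true))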

  10. Perfect state transfer over distance-regular spin networks

    SciTech Connect

    Jafarizadeh, M. A.; Sufiani, R.

    2008-02-15

    Christandl et al. have noted that the d-dimensional hypercube can be projected to a linear chain with d+1 sites so that, by considering fixed but different couplings between the qubits assigned to the sites, the perfect state transfer (PST) can be achieved over arbitrarily long distances in the chain [Phys. Rev. Lett. 92, 187902 (2004); Phys. Rev. A 71, 032312 (2005)]. In this work we consider distance-regular graphs as spin networks and note that any such network (not just the hypercube) can be projected to a linear chain and so can allow PST over long distances. We consider some particular spin Hamiltonians which are the extended version of those of Christandl et al. Then, by using techniques such as stratification of distance-regular graphs and spectral analysis methods, we give a procedure for finding a set of coupling constants in the Hamiltonians so that a particular state initially encoded on one site will evolve freely to the opposite site without any dynamical control, i.e., we show how to derive the parameters of the system so that PST can be achieved. It is seen that PST is allowed in distance-regular spin networks only when, starting from an arbitrary vertex as the reference vertex (prepared in the initial state which we wish to transfer), the last stratum of the network with respect to the reference state contains only one vertex; i.e., the stratification of these networks plays an important role in determining in which kinds of networks, and between which of their vertices, PST is allowed. As examples, the cycle network with an even number of vertices and the d-dimensional hypercube are considered in detail and the method is applied to some important distance-regular networks.
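
    The linear-chain special case cited here is easy to verify numerically: with couplings J_n = (1/2)√(n(N−n)), the single-excitation Hamiltonian has an equally spaced spectrum and transfers site 1 to site N perfectly at t = π in these units. A minimal check (numpy only):

        import numpy as np

        N = 6                                   # chain obtained by projecting a hypercube
        n = np.arange(1, N)
        J = 0.5 * np.sqrt(n * (N - n))          # Christandl et al. couplings
        H = np.diag(J, 1) + np.diag(J, -1)      # single-excitation XX Hamiltonian

        w, V = np.linalg.eigh(H)
        t = np.pi                               # transfer time for this normalization
        U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
        print(abs(U[N - 1, 0]))                 # fidelity |<N|exp(-iHt)|1>|, ~1.0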

  11. Manifestly scale-invariant regularization and quantum effective operators

    NASA Astrophysics Data System (ADS)

    Ghilencea, D. M.

    2016-05-01

    Scale-invariant theories are often used to address the hierarchy problem. However, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function μ(σ) which, after spontaneous breaking of the scale symmetry, generates the usual dimensional regularization subtraction scale μ(⟨σ⟩). One consequence is that "evanescent" interactions generated by scale invariance of the action in d = 4 − 2ε (but vanishing in d = 4) give rise to new, finite quantum corrections. We find a (finite) correction ΔU(ϕ, σ) to the one-loop scalar potential for ϕ and σ, beyond the Coleman-Weinberg term. ΔU is due to an evanescent correction (∝ ε) to the field-dependent masses (of the states in the loop) which multiplies the pole (∝ 1/ε) of the momentum integral to give a finite quantum result. ΔU contains a nonpolynomial operator ~ϕ^6/σ^2 of known coefficient and is independent of the dimensionless subtraction parameter. A more general μ(ϕ, σ) is ruled out since, in their classical decoupling limit, the visible sector (of the Higgs ϕ) and the hidden sector (dilaton σ) would still interact at the quantum level; thus, the subtraction function must depend on the dilaton only, μ ~ σ. The method is useful in models where preserving scale symmetry at the quantum level is important.

  12. Regular attractors and nonautonomous perturbations of them

    SciTech Connect

    Vishik, Marko I; Zelik, Sergey V; Chepyzhov, Vladimir V

    2013-01-31

    We study regular global attractors of dissipative dynamical semigroups with discrete or continuous time and we investigate attractors for nonautonomous perturbations of such semigroups. The main theorem states that the regularity of global attractors is preserved under small nonautonomous perturbations. Moreover, nonautonomous regular global attractors remain exponential and robust. We apply these general results to model nonautonomous reaction-diffusion systems in a bounded domain of R^3 with time-dependent external forces. Bibliography: 22 titles.

  13. Elementary Particle Spectroscopy in Regular Solid Rewrite

    NASA Astrophysics Data System (ADS)

    Trell, Erik

    2008-10-01

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)×O(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each

  14. Elementary Particle Spectroscopy in Regular Solid Rewrite

    SciTech Connect

    Trell, Erik

    2008-10-17

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it ''is the likely keystone of a fundamental computational foundation'' also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each

  15. State-Space Regularization: Geometric Theory

    SciTech Connect

    Chavent, G.; Kunisch, K.

    1998-05-15

    Regularization of nonlinear ill-posed inverse problems is analyzed for a class of problems that is characterized by mappings which are the composition of a well-posed nonlinear and an ill-posed linear mapping. Regularization is carried out in the range of the nonlinear mapping. In applications this corresponds to the state-space variable of a partial differential equation or to preconditioning of data. The geometric theory of projection onto quasi-convex sets is used to analyze the stabilizing properties of this regularization technique and to describe its asymptotic behavior as the regularization parameter tends to zero.

  16. Digital image correlation involves an inverse problem: A regularization scheme based on subset size constraint

    NASA Astrophysics Data System (ADS)

    Zhan, Qin; Yuan, Yuan; Fan, Xiangtao; Huang, Jianyong; Xiong, Chunyang; Yuan, Fan

    2016-06-01

    Digital image correlation (DIC) essentially involves a class of inverse problem. Here, a regularization scheme is developed for the subset-based DIC technique to effectively inhibit the potential ill-posedness that can arise in actual deformation calculations and hence enhance the numerical stability, accuracy and precision of correlation measurement. With the aid of a parameterized two-dimensional Butterworth window, a regularized subpixel registration strategy is established, in which the amount of speckle information introduced into correlation calculations may be weighted through an equivalent subset size constraint. The optimal regularization parameter associated with each individual sampling point is determined in a self-adaptive way by numerically investigating the curve of the 2-norm condition number of the coefficient matrix versus the corresponding equivalent subset size, based on which the regularized solution can eventually be obtained. Numerical results deriving from both synthetic speckle images and actual experimental images demonstrate the feasibility and effectiveness of the newly proposed regularized DIC algorithms.
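
    The role of the window can be mimicked on a toy problem: weight the six-parameter affine DIC design matrix by a two-dimensional Butterworth window and track the 2-norm condition number of the Gauss-Newton matrix as the cutoff radius (the equivalent subset size) varies. Everything below, including the random stand-in for image gradients, is an invented illustration rather than the paper's algorithm:

        import numpy as np

        def dic_condition(r_cut, half=15, order=4, seed=7):
            """Condition number of the Butterworth-weighted Gauss-Newton matrix."""
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            w = 1.0 / (1.0 + (np.hypot(x, y) / r_cut) ** (2 * order))  # window weights
            rng = np.random.default_rng(seed)
            gx = rng.standard_normal(x.shape)       # stand-in for image gradients
            gy = rng.standard_normal(x.shape)
            J = np.stack([gx, gx * x, gx * y, gy, gy * x, gy * y], -1).reshape(-1, 6)
            Jw = J * w.ravel()[:, None]
            return np.linalg.cond(Jw.T @ Jw)

        for r in (3, 8, 15):
            print(r, dic_condition(r))   # larger equivalent subsets condition better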

  17. Wall and antiwall in the Randall-Sundrum model and a new infrared regularization

    NASA Astrophysics Data System (ADS)

    Ichinose, Shoichi

    2002-04-01

    An approach to finding the field equation solution of the Randall-Sundrum model with the S1/Z2 extra axis is presented. We closely examine the infrared singularity. The vacuum is set by the five-dimensional Higgs boson field. Both the domain wall and the anti-domain-wall naturally appear, at the ends of the extra compact axis, by taking a new infrared regularization. The solution is considered to be stable by the kink boundary condition. A continuous (infrared-) regularized solution, which is a truncated Fourier series of a discontinuous solution, is utilized. The ultraviolet-infrared relation appears in the regularized solution.

  18. Higher spin black holes in three dimensions: Remarks on asymptotics and regularity

    NASA Astrophysics Data System (ADS)

    Bañados, Máximo; Canto, Rodrigo; Theisen, Stefan

    2016-07-01

    In the context of (2+1)-dimensional SL(N,R) × SL(N,R) Chern-Simons theory we explore issues related to regularity and asymptotics on the solid torus, for stationary and circularly symmetric solutions. We display and solve all necessary conditions to ensure a regular metric and metric-like higher spin fields. We prove that holonomy conditions are necessary but not sufficient conditions to ensure regularity, and that Hawking conditions do not necessarily follow from them. Finally we give a general proof that once the chemical potentials are turned on, as demanded by regularity, the asymptotics cannot be that of Brown-Henneaux.

  19. Regular Decompositions for H(div) Spaces

    SciTech Connect

    Kolev, Tzanio; Vassilevski, Panayot

    2012-01-01

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  20. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... advances without approval of the NCUA Board for a period of six months after becoming a member. This subsection shall not apply to any credit union which becomes a Regular member of the Facility within six... member of the Facility at any time within six months prior to becoming a Regular member of the Facility....

  1. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-04-01

    Possible nonperturbative continuum regularization schemes for quantum field theory are discussed which are based upon the Langevin equation of Parisi and Wu. Breit, Gupta and Zaks made the first proposal for a new gauge-invariant nonperturbative regularization. The scheme is based on smearing in the ''fifth time'' of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, two difficulties arise which, in general, ruin the scheme. One problem is that the superficial quadratic divergences force a bottomless action for the noise. Another difficulty is that stochastic regularization by fifth-time smearing is incompatible with Zwanziger's gauge fixing, which is the only known nonperturbative covariant gauge fixing for nonabelian gauge theories. Finally, a successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest. Hopefully, the scheme will lend itself to nonperturbative analysis. 44 refs., 16 figs.

  2. On regularizations of the Dirac delta distribution

    NASA Astrophysics Data System (ADS)

    Hosseini, Bamdad; Nigam, Nilima; Stockie, John M.

    2016-01-01

    In this article we consider regularizations of the Dirac delta distribution with applications to prototypical elliptic and hyperbolic partial differential equations (PDEs). We study the convergence of a sequence of distributions SH to a singular term S as a parameter H (associated with the support size of SH) shrinks to zero. We characterize this convergence in both the weak-* topology of distributions and a weighted Sobolev norm. These notions motivate a framework for constructing regularizations of the delta distribution that includes a large class of existing methods in the literature. This framework allows different regularizations to be compared. The convergence of solutions of PDEs with these regularized source terms is then studied in various topologies such as pointwise convergence on a deleted neighborhood and weighted Sobolev norms. We also examine the lack of symmetry in tensor product regularizations and effects of dissipative error in hyperbolic problems.
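
    The convergence being characterized can be observed directly: regularize δ with either a Gaussian or a compactly supported cosine hat of width H and watch ∫ f δ_H → f(0) as H shrinks. A small sketch (the test function and widths are arbitrary choices):

        import numpy as np

        def delta_gauss(x, H):
            return np.exp(-0.5 * (x / H) ** 2) / (H * np.sqrt(2.0 * np.pi))

        def delta_cos(x, H):
            """Compactly supported cosine hat, a common regularization choice."""
            return np.where(np.abs(x) < H, (1.0 + np.cos(np.pi * x / H)) / (2.0 * H), 0.0)

        f = lambda x: np.cos(3.0 * x) + x ** 2         # smooth test function, f(0) = 1
        x = np.linspace(-1.0, 1.0, 20001)
        dx = x[1] - x[0]
        for H in (0.5, 0.1, 0.02):
            for d in (delta_gauss, delta_cos):
                err = np.sum(f(x) * d(x, H)) * dx - f(0.0)
                print(H, d.__name__, err)              # error shrinks with H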

  3. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    SciTech Connect

    Gao, J. M.; Liu, Y.; Li, W.; Lu, J.; Dong, Y. B.; Xia, Z. W.; Yi, P.; Yang, Q. W.

    2013-09-15

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness, and the local plasma emission is obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter using the generalized cross-validation criterion. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.

  4. Interior Regularity Estimates in High Conductivity Homogenization and Application

    NASA Astrophysics Data System (ADS)

    Briane, Marc; Capdeboscq, Yves; Nguyen, Luc

    2013-01-01

    In this paper, uniform pointwise regularity estimates for the solutions of conductivity equations are obtained in a unit conductivity medium reinforced by an ɛ-periodic lattice of highly conducting thin rods. The estimates are derived only at a distance ɛ^(1+τ) (for some τ > 0) away from the fibres. This distance constraint is rather sharp since the gradients of the solutions are shown to be unbounded locally in L^p as soon as p > 2. One key ingredient is the derivation in dimension two of regularity estimates to the solutions of the equations deduced from a Fourier series expansion with respect to the fibres' direction, and weighted by the high-contrast conductivity. The dependence on powers of ɛ of these two-dimensional estimates is shown to be sharp. The initial motivation for this work comes from imaging, and enhanced resolution phenomena observed experimentally in the presence of micro-structures (Lerosey et al., Science 315:1120-1124, 2007). We use these regularity estimates to characterize the signature of low volume fraction heterogeneities in the fibred reinforced medium, assuming that the heterogeneities stay at a distance ɛ^(1+τ) away from the fibres.

  5. Manifold regularized non-negative matrix factorization with label information

    NASA Astrophysics Data System (ADS)

    Li, Huirong; Zhang, Jiangshe; Wang, Changpeng; Liu, Junmin

    2016-03-01

    Non-negative matrix factorization (NMF) as a popular technique for finding parts-based, linear representations of non-negative data has been successfully applied in a wide range of applications, such as feature learning, dictionary learning, and dimensionality reduction. However, both the local manifold regularization of data and the discriminative information of the available label have not been taken into account together in NMF. We propose a new semisupervised matrix decomposition method, called manifold regularized non-negative matrix factorization (MRNMF) with label information, which incorporates the manifold regularization and the label information into the NMF to improve the performance of NMF in clustering tasks. We encode the local geometrical structure of the data space by constructing a nearest neighbor graph and enhance the discriminative ability of different classes by effectively using the label information. Experimental comparisons with the state-of-the-art methods on the COIL20, PIE, Extended Yale B, and MNIST databases demonstrate the effectiveness of MRNMF.
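
    MRNMF builds on graph-regularized NMF. The multiplicative updates for the plain manifold-regularized objective ||X − UV^T||_F^2 + λ tr(V^T L V), without the label terms this record adds, look as follows; the kNN affinity construction and all constants are illustrative:

        import numpy as np

        def gnmf(X, k, A, lam=1.0, iters=200, seed=0):
            """Graph-regularized NMF via multiplicative updates (L = D - A)."""
            rng = np.random.default_rng(seed)
            n, m = X.shape
            U, V = rng.random((n, k)), rng.random((m, k))
            D = np.diag(A.sum(axis=1))
            for _ in range(iters):
                U *= (X @ V) / (U @ (V.T @ V) + 1e-12)
                V *= (X.T @ U + lam * A @ V) / (V @ (U.T @ U) + lam * D @ V + 1e-12)
            return U, V

        # Toy data: 100 samples (columns) with a crude 5-NN affinity graph.
        rng = np.random.default_rng(5)
        X = np.abs(rng.standard_normal((40, 100)))
        d2 = ((X.T[:, None, :] - X.T[None, :, :]) ** 2).sum(-1)
        idx = np.argsort(d2, axis=1)[:, 1:6]
        A = np.zeros((100, 100))
        for i in range(100):
            A[i, idx[i]] = 1.0
        A = np.maximum(A, A.T)               # symmetrize the kNN graph
        U, V = gnmf(X, k=5, A=A)
        print(np.linalg.norm(X - U @ V.T))   # reconstruction residual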

  6. Quantitative regularities in floodplain formation

    NASA Astrophysics Data System (ADS)

    Nevidimova, O.

    2009-04-01

    Modern methods of the theory of complex systems allow one to build mathematical models of complex systems in which self-organizing processes are largely determined by nonlinear effects and feedback. However, some factors exert significant influence on the dynamics of geomorphosystems but can hardly be expressed adequately in the language of mathematical models. Conceptual modeling allows us to overcome this difficulty. It is based on the methods of synergetics, which, together with the theory of dynamical systems and classical geomorphology, make it possible to display the dynamics of geomorphological systems. The most adequate concept for mathematical modeling of complex systems is that of model dynamics based on equilibrium. This concept rests on dynamic equilibrium, the tendency toward which is observed in the evolution of all geomorphosystems. As an objective law, it is revealed in the evolution of fluvial relief in general, and in river channel processes in particular, demonstrating the ability of these systems to self-organize. The channel process is expressed in the formation of river reaches, riffles, meanders and floodplain. As the floodplain is a surface periodically flooded during high waters, it naturally connects the river channel with the slopes, being one of the boundary expressions of the water stream's activity. Floodplain dynamics is inseparable from channel dynamics. The floodplain is formed by simultaneous horizontal and vertical displacement of the river channel, that is, Y = Y(x, y), where x and y are the horizontal and vertical coordinates and Y is the floodplain height. When dy/dt = 0 (for a channel that is not incising), the river, being displaced in a horizontal plane, leaves behind a low surface whose flooding during high waters (total duration of flooding) decreases from its maximum at the initial moment t0 to zero at the moment tn. The total amount of material accumulated on the floodplain surface changes in a similar manner

  8. Image reconstruction based on L1 regularization and projection methods for electrical impedance tomography.

    PubMed

    Wang, Qi; Wang, Huaxiang; Zhang, Ronghua; Wang, Jinhai; Zheng, Yu; Cui, Ziqiang; Yang, Chengyi

    2012-10-01

    Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the resulting changes in voltage. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem. The Tikhonov method with L2 regularization is commonly used to solve the EIT problem. However, the L2 method tends to smooth the sharp changes or discontinuous areas of the reconstruction. Image reconstruction using L1 regularization addresses this difficulty. In this paper, a sum of absolute values is substituted for the sum of squares used in the L2 regularization to form the L1 regularization, and the solution is obtained by the barrier method. However, the L1 method often involves repeatedly solving large-dimensional matrix equations, which is computationally expensive. In this paper, the projection method is combined with the L1 regularization method to reduce the computational cost; the L1 problem is mainly solved in the coarse subspace. This paper also discusses strategies for choosing parameters. Both simulation and experimental results of the L1 regularization method were compared with the L2 regularization method, indicating that the L1 regularization method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages. Furthermore, the projected L1 method can effectively reduce the computational time without affecting the quality of reconstructed images.
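
    The paper's barrier-method solver is tied to its projected formulation, but the same L1 objective, min_x ||Ax − b||² + λ||x||₁, can be prototyped with iterative soft-thresholding (ISTA). The sketch below uses a fabricated sensing matrix and also shows the L2 (Tikhonov) solution smearing a sharp inclusion:

        import numpy as np

        def ista(A, b, lam, iters=500):
            """Iterative soft-thresholding for min ||A x - b||^2 + lam ||x||_1."""
            L = np.linalg.norm(A, 2) ** 2          # step scaling from the Lipschitz bound
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = x - (A.T @ (A @ x - b)) / L
                x = np.sign(g) * np.maximum(np.abs(g) - lam / (2.0 * L), 0.0)
            return x

        rng = np.random.default_rng(6)
        A = rng.standard_normal((104, 400))          # e.g. 104 boundary measurements
        x_true = np.zeros(400)
        x_true[180:200] = 1.0                        # sharp conductivity anomaly
        b = A @ x_true + 0.05 * rng.standard_normal(104)
        x_l1 = ista(A, b, lam=0.5)
        x_l2 = np.linalg.solve(A.T @ A + 0.5 * np.eye(400), A.T @ b)  # Tikhonov
        print(np.linalg.norm(x_l1 - x_true), np.linalg.norm(x_l2 - x_true))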

  9. Regular black holes and noncommutative geometry inspired fuzzy sources

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinpei

    2016-05-01

    We investigated regular black holes with fuzzy sources in three and four dimensions. The density distributions of such fuzzy sources are inspired by noncommutative geometry and given by Gaussian or generalized Gaussian functions. We utilized mass functions to give a physical interpretation of the horizon-formation condition for the black holes. In particular, we investigated three-dimensional BTZ-like black holes and four-dimensional Schwarzschild-like black holes in detail, and found that the number of horizons is related to the space-time dimension, and that what matters is the existence of a void in the vicinity of the center of the space-time, rather than noncommutativity itself. As an application, we considered a three-dimensional black hole whose source is the fuzzy disc, a disc-shaped region known in the context of noncommutative geometry. We also analyzed a four-dimensional black hole with a source whose density distribution is an extension of the fuzzy disc, and investigated the horizon-formation condition for it.
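
    As a generic illustration of the horizon-formation condition via mass functions, the sketch below counts the real roots of the four-dimensional lapse f(r) = 1 − 2m(r)/r for a Gaussian-smeared source with m(r) = M γ(3/2, r²/4θ)/Γ(3/2), the standard noncommutative-geometry-inspired choice. The parameter values are assumptions for illustration, not the paper's specific models.

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def lapse(r, M=2.0, theta=0.1):
    """f(r) = 1 - 2 m(r)/r with smeared mass m(r) = M * P(3/2, r^2/(4 theta))."""
    return 1.0 - 2.0 * M * gammainc(1.5, r**2 / (4.0 * theta)) / r

r = np.linspace(1e-3, 10.0, 100_000)
f = lapse(r)
n_horizons = int(np.sum(np.sign(f[1:]) != np.sign(f[:-1])))
print(n_horizons)   # 2 here; shrinking M below a critical value yields 1, then 0
```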

  10. Topology-based hexahedral regular meshing for wave propagation

    NASA Astrophysics Data System (ADS)

    Fousse, Allan; Bertrand, Yves; Rodrigues, Dominique

    2000-10-01

    Numeric simulations allow the study of physical phenomena that are impossible or difficult to realize in the real world. For example, it is not conceivable to set off an atomic explosion or an earthquake in order to explore the effects on a building or a flood barrier. To be realistic, this kind of wave-propagation simulation must take into account all the characteristics of the domain where it takes place, and in particular its three-dimensional nature. Numericians therefore need not only a three-dimensional model of the domain, but also a meshing of this domain. When finite-difference-based methods are used, this meshing must be hexahedral and regular. Moreover, new developments in the numerical propagation code provide tools for using meshes that interpolate the interior subdivisions of the domain. However, the manual generation of this kind of meshing is a long and difficult process. To improve and simplify this work, we propose a semi-automatic algorithm based on a block subdivision. It makes use of the dissociation between the geometrical and topological aspects. Indeed, with our topological model a regular hexahedral meshing is far easier to generate. The geometry of this meshing can be supplied by a geometric model, with reconstruction, interpolation or parameterization methods, but it is in any case completely guided by the topological model. The result is a software tool presently used by the Commissariat à l'Énergie Atomique in several full-size studies, notably within the framework of the Comprehensive Test Ban Treaty.
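
    The elementary ingredient of such meshes is the structured block of hexahedra; the semi-automatic algorithm above assembles blocks of this kind under the guidance of the topological model. A minimal sketch of the node/cell bookkeeping for one regular block (function name and vertex ordering are illustrative assumptions):

```python
import numpy as np

def regular_hex_block(nx, ny, nz):
    """Nodes and hexahedral cells of a regular nx-by-ny-by-nz grid on the unit cube."""
    xs, ys, zs = (np.linspace(0, 1, n + 1) for n in (nx, ny, nz))
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    nodes = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

    def nid(i, j, k):                    # flat index of grid node (i, j, k)
        return (i * (ny + 1) + j) * (nz + 1) + k

    cells = [[nid(i, j, k),       nid(i+1, j, k),
              nid(i+1, j+1, k),   nid(i, j+1, k),
              nid(i, j, k+1),     nid(i+1, j, k+1),
              nid(i+1, j+1, k+1), nid(i, j+1, k+1)]
             for i in range(nx) for j in range(ny) for k in range(nz)]
    return nodes, np.array(cells)

nodes, cells = regular_hex_block(4, 3, 2)   # 60 nodes, 24 hexahedral cells
```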

  11. Parallel Communicating Grammar Systems with Regular Control

    NASA Astrophysics Data System (ADS)

    Pardubská, Dana; Plátek, Martin; Otto, Friedrich

    Parallel communicating grammar systems with regular control (RPCGS, for short) are introduced, which are obtained from returning regular parallel communicating grammar systems by restricting the derivations that are executed in parallel by the various components through a regular control language. For the class of languages that are generated by RPCGSs with constant communication complexity we derive a characterization in terms of a restricted type of freely rewriting restarting automaton. From this characterization we obtain that these languages are semi-linear, and that centralized RPCGSs with constant communication complexity are of the same generative power as non-centralized RPCGSs with constant communication complexity.

  12. Regularization schemes and the multiplicative anomaly

    NASA Astrophysics Data System (ADS)

    Evans, T. S.

    1999-06-01

    Elizalde, Vanzo, and Zerbini have shown that the effective action of two free Euclidean scalar fields in flat space contains a `multiplicative anomaly' when ζ-function regularization is used. This is related to the Wodzicki residue. I show that there is no anomaly when using a wide range of other regularization schemes and that the anomaly can be removed by an unusual choice of renormalization scales. I define new types of anomalies and show that they have similar properties. Thus multiplicative anomalies encode no novel physics. They merely illustrate some dangerous aspects of ζ-function and Schwinger proper time regularization schemes.
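
    For reference, the quantities at issue are ζ-regularized determinants; a minimal statement of the definitions and of the multiplicative anomaly, in standard notation (not specific to this paper):

```latex
\zeta_A(s) = \sum_n \lambda_n^{-s}, \qquad
\ln \det\nolimits_\zeta A = -\zeta_A'(0), \qquad
a(A,B) = \ln \det\nolimits_\zeta (AB) - \ln \det\nolimits_\zeta A - \ln \det\nolimits_\zeta B .
```

    The claim above is that a(A, B), generically nonzero under ζ-function and Schwinger proper-time schemes, vanishes in a wide range of other schemes or can be absorbed by renormalization-scale choices.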

  13. Analysis of regularized inversion of data corrupted by white Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-04-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ‖Au − m‖²_{L²} + α‖u‖²_{H^r} }, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed.
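
    In finite dimensions the r = 0 estimator above is ordinary ridge regression, solved by the normal equations; the sketch below is only illustrative, and the choice α = δ is a simple stand-in for the α(δ) scalings analysed in the paper.

```python
import numpy as np

def tikhonov(A, m, alpha):
    """Minimizer of ||A u - m||^2 + alpha*||u||^2: (A^T A + alpha I) u = A^T m."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ m)

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 50))                  # discretized forward operator
u_true = rng.standard_normal(50)
delta = 0.1
m = A @ u_true + delta * rng.standard_normal(100)   # white-noise measurement model
u_alpha = tikhonov(A, m, alpha=delta)               # regularized reconstruction
```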

  14. The L(1/2) regularization approach for survival analysis in the accelerated failure time model.

    PubMed

    Chai, Hua; Liang, Yong; Liu, Xiao-Ying

    2015-09-01

    The analysis of high-dimensional, low-sample-size microarray data for survival analysis of cancer patients is an important problem. It is a huge challenge to select the significantly relevant biomarkers from microarray gene expression datasets, in which the number of genes far exceeds the number of samples. In this article, we develop a robust approach for predicting patient survival time using an L(1/2) regularization estimator with the accelerated failure time (AFT) model. The L(1/2) regularization can be seen as a typical representative of L(q) (0 < q < 1) regularization methods and has shown many attractive features. In order to optimize the problem of relevant gene selection in high-dimensional biological data, we implemented the L(1/2) regularized AFT model by the coordinate descent algorithm with a renewed half-thresholding operator. The results of the simulation experiment showed that we can obtain a more accurate and sparse predictor for survival analysis with the L(1/2) regularized AFT model than with other L1-type regularization methods. The proposed procedures are applied to five real DNA microarray datasets to efficiently predict the survival time of patients based on a set of clinical prognostic factors and gene signatures.
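
    The workhorse of such algorithms is the coordinate-wise half-thresholding operator. The sketch below follows the closed form reported for L(1/2) thresholding by Xu et al. (2012); the constants are quoted from that literature and may differ in detail from this article's "renewed" operator.

```python
import numpy as np

def half_threshold(z, lam):
    """Half-thresholding operator for L_{1/2} regularization (Xu et al., 2012),
    applied coordinate-wise inside a coordinate-descent loop."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)   # threshold level
    big = np.abs(z) > t
    phi = np.arccos((lam / 8.0) * (np.abs(z[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * z[big] * (1.0 + np.cos(2.0 * (np.pi - phi) / 3.0))
    return out

print(half_threshold([0.5, 1.0, -3.0], lam=1.0))   # small entries are zeroed
```

    Unlike the continuous soft-thresholding of L1 methods, this operator jumps to (2/3)z at the threshold, which is what produces the sparser solutions reported above.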

  15. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
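
    One elementary route (in the spirit of the note, though not necessarily one of its five methods) splits the octahedron of edge a into two square pyramids of base area a² and height a/√2:

```latex
V \;=\; 2 \cdot \tfrac{1}{3}\, a^{2} \cdot \tfrac{a}{\sqrt{2}}
  \;=\; \tfrac{\sqrt{2}}{3}\, a^{3} \;\approx\; 0.4714\, a^{3}.
```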

  16. Parallelization of irregularly coupled regular meshes

    NASA Technical Reports Server (NTRS)

    Chase, Craig; Crowley, Kay; Saltz, Joel; Reeves, Anthony

    1992-01-01

    Regular meshes are frequently used for modeling physical phenomena on both serial and parallel computers. One advantage of regular meshes is that efficient discretization schemes can be implemented in a straightforward manner. However, geometrically complex objects, such as aircraft, cannot be easily described using a single regular mesh. Multiple interacting regular meshes are frequently used to describe complex geometries. Each mesh models a subregion of the physical domain. The meshes, or subdomains, can be processed in parallel, with periodic updates carried out to move information between the coupled meshes. In many cases, there is a relatively small number (one to a few dozen) of subdomains, so that each subdomain may also be partitioned among several processors. We outline a composite run-time/compile-time approach for supporting these problems efficiently on distributed-memory machines. These methods are described in the context of a multiblock fluid dynamics problem developed at LaRC.

  17. Regular Exercise: Antidote for Deadly Diseases?

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_160326.html Regular Exercise: Antidote for Deadly Diseases? High levels of physical ... Aug. 9, 2016 (HealthDay News) -- Getting lots of exercise may reduce your risk for five common diseases, ...

  18. Mixed-Norm Regularization for Brain Decoding

    PubMed Central

    Flamary, R.; Jrad, N.; Phlypo, R.; Congedo, M.; Rakotomamonjy, A.

    2014-01-01

    This work investigates the use of mixed-norm regularization for sensor selection in event-related potential (ERP) based brain-computer interfaces (BCI). The classification problem is cast as a discriminative optimization framework where sensor selection is induced through the use of mixed norms. This framework is extended to the multitask learning situation, where several similar classification tasks related to different subjects are learned simultaneously. In this case, multitask learning helps mitigate the data-scarcity issue, yielding more robust classifiers. For this purpose, we have introduced a regularizer that induces both sensor selection and classifier similarity. The different regularization approaches are compared on three ERP datasets, showing the interest of mixed-norm regularization in terms of sensor selection. The multitask approaches are evaluated when only a small number of learning examples is available, yielding significant performance improvements, especially for subjects performing poorly. PMID:24860614
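
    The sensor-selection effect comes from the block structure of the mixed norm: an l2,1 penalty zeroes out whole sensor groups at once. A minimal sketch of its proximal operator (the paper's multitask regularizer additionally couples classifiers across subjects; all names here are illustrative):

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Prox of lam * sum_g ||W[g, :]||_2: rows (one per sensor) whose norm
    falls below lam are set to zero, which is what deselects sensors."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0) * W

W = np.random.default_rng(2).standard_normal((16, 5))   # 16 sensors, 5 weights each
W_sparse = group_soft_threshold(W, lam=2.0)
selected = np.flatnonzero(np.linalg.norm(W_sparse, axis=1) > 0)  # surviving sensors
```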

  19. Continuum regularization of quantum field theory

    SciTech Connect

    Bern, Z.

    1986-01-01

    Breit, Gupta, and Zaks made the first proposal for a new gauge-invariant nonperturbative regularization. The scheme is based on smearing in the fifth time of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, difficulties arise which, in general, ruin the scheme. A successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest.

  20. Bayesian Methods for High Dimensional Linear Models

    PubMed Central

    Mallick, Himel; Yi, Nengjun

    2013-01-01

    In this article, we present a selective overview of some recent developments in Bayesian model and variable selection methods for high dimensional linear models. While most of the reviews in literature are based on conventional methods, we focus on recently developed methods, which have proven to be successful in dealing with high dimensional variable selection. First, we give a brief overview of the traditional model selection methods (viz. Mallow’s Cp, AIC, BIC, DIC), followed by a discussion on some recently developed methods (viz. EBIC, regularization), which have occupied the minds of many statisticians. Then, we review high dimensional Bayesian methods with a particular emphasis on Bayesian regularization methods, which have been used extensively in recent years. We conclude by briefly addressing the asymptotic behaviors of Bayesian variable selection methods for high dimensional linear models under different regularity conditions. PMID:24511433

  2. Regular black holes in f (R ) gravity coupled to nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Rodrigues, Manuel E.; Junior, Ednaldo L. B.; Marques, Glauber T.; Zanchin, Vilson T.

    2016-07-01

    We obtain a class of regular black hole solutions in four-dimensional f (R ) gravity, R being the curvature scalar, coupled to a nonlinear electromagnetic source. The metric formalism is used and static spherically symmetric spacetimes are assumed. The resulting f (R ) and nonlinear electrodynamics functions are characterized by a one-parameter family of solutions which are generalizations of known regular black holes in general relativity coupled to nonlinear electrodynamics. The related regular black holes of general relativity are recovered when the free parameter vanishes, in which case one has f (R )∝R . We analyze the regularity of the solutions and also show that there are particular solutions that violate only the strong energy condition.

  3. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  4. Analysis of a Regularized Bingham Model with Pressure-Dependent Yield Stress

    NASA Astrophysics Data System (ADS)

    El Khouja, Nazek; Roquet, Nicolas; Cazacliu, Bogdan

    2015-12-01

    The goal of this article is to provide some essential results for the solution of a regularized viscoplastic frictional flow model adapted from the extensive mathematical analysis of the Bingham model. The Bingham model is a standard for the description of viscoplastic flows and is widely used in many application areas. However, wet granular viscoplastic flows necessitate the introduction of additional nonlinearities and of coupling between the velocity and stress fields. This article proposes a step toward a frictional coupling, characterized by a dependence of the yield stress on the pressure field. A regularized version of this viscoplastic frictional model is analysed in the framework of stationary flows. Existence, uniqueness and regularity are investigated, as well as finite-dimensional and algorithmic approximations. It is shown that the model can be solved and approximated provided a frictional parameter is small enough. Obtaining similar results for the non-regularized model remains an open problem. Numerical investigations are postponed to further work.
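
    The abstract does not spell out the regularization; purely as an illustration, a Papanastasiou-type smoothing of the Bingham law with a frictional, pressure-dependent yield stress would read

```latex
\boldsymbol{\tau}
  = \left( \mu + \tau_y(p)\,
    \frac{1 - e^{-m \|\dot{\boldsymbol{\gamma}}\|}}{\|\dot{\boldsymbol{\gamma}}\|}
    \right) \dot{\boldsymbol{\gamma}},
  \qquad \tau_y(p) = \mu_f\, p,
```

    where m is the regularization parameter and μ_f the frictional parameter whose smallness, per the abstract, is what keeps the coupled problem solvable.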

  5. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction of EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time consumption for solving the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on separable approximation algorithm is proposed by using adaptive step-size and preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798

  7. Strong regularizing effect of integrable systems

    SciTech Connect

    Zhou, Xin

    1997-11-01

    Many time evolution problems have the so-called strong regularizing effect, that is, with any irregular initial data, as soon as t becomes greater than 0, the solution becomes C^∞ in both the spatial and temporal variables. This paper studies 1×1-dimensional integrable systems for such a regularizing effect. In the work by Sachs and Kappler [S][K] (see also the earlier works [KFJ] and [Ka]), the strong regularizing effect is proved for KdV with rapidly decaying irregular initial data, using the inverse scattering method. There are two equivalent Gel'fand-Levitan-Marchenko (GLM) equations associated to an inverse scattering problem, one normalized at x = +∞ and the other at x = −∞. The method of [S][K] relies on the fact that the KdV waves propagate only in one direction and therefore one of the two GLM equations remains normalized and can be differentiated infinitely many times. 15 refs.

  8. Shadow of rotating regular black holes

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon; Amir, Muhammed; Ahmedov, Bobomurat; Ghosh, Sushant G.

    2016-05-01

    We study the shadows cast by different types of rotating regular black holes, viz. Ayón-Beato-García (ABG), Hayward, and Bardeen. Besides the total mass (M) and rotation parameter (a), these black holes carry additional parameters: electric charge (Q), deviation parameter (g), and magnetic charge (g*). Interestingly, the size of the shadow is affected by these parameters in addition to the rotation parameter. We find that the radius of the shadow in each case decreases monotonically, and the distortion parameter increases, as the values of these parameters increase. A comparison with the standard Kerr case is also made. We have also studied the influence of a plasma environment around regular black holes on their shadows. The presence of the plasma increases the apparent size of the regular black hole's shadow due to two effects: (i) the gravitational redshift of the photons and (ii) the radial dependence of the plasma density.

  9. Regular homotopy for immersions of graphs into surfaces

    NASA Astrophysics Data System (ADS)

    Permyakov, D. A.

    2016-06-01

    We study invariants of regular immersions of graphs into surfaces up to regular homotopy. The concept of the winding number is used to introduce a new simple combinatorial invariant of regular homotopy. Bibliography: 20 titles.

  10. REGULAR VERSUS DIFFUSIVE PHOTOSPHERIC FLUX CANCELLATION

    SciTech Connect

    Litvinenko, Yuri E.

    2011-04-20

    Observations of photospheric flux cancellation on the Sun imply that cancellation can be a diffusive rather than regular process. A criterion is derived, which quantifies the parameter range in which diffusive photospheric cancellation should occur. Numerical estimates show that regular cancellation models should be expected to give a quantitatively accurate description of photospheric cancellation. The estimates rely on a recently suggested scaling for a turbulent magnetic diffusivity, which is consistent with the diffusivity measurements on spatial scales varying by almost two orders of magnitude. Application of the turbulent diffusivity to large-scale dispersal of the photospheric magnetic flux is discussed.

  11. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  12. Regularizing the divergent structure of light-front currents

    SciTech Connect

    Bakker, Bernard L. G.; Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2001-04-01

    The divergences appearing in the (3+1)-dimensional fermion-loop calculations are often regulated by smearing the vertices in a covariant manner. Performing a parallel light-front calculation, we corroborate the similarity between the vertex-smearing technique and the Pauli-Villars regularization. In the light-front calculation of the electromagnetic meson current, we find that the persistent end-point singularity that appears in the case of point vertices is removed even if the smeared vertex is taken to the limit of the point vertex. Recapitulating the current conservation, we substantiate the finiteness of both valence and nonvalence contributions in all components of the current with the regularized bound-state vertex. However, we stress that each contribution, valence or nonvalence, depends on the reference frame even though the sum is always frame independent. The numerical taxonomy of each contribution including the instantaneous contribution and the zero-mode contribution is presented in the {pi}, K, and D-meson form factors.

  13. The geometric β-function in curved space-time under operator regularization

    SciTech Connect

    Agarwala, Susama

    2015-06-15

    In this paper, I compare the generators of the renormalization group flow, or the geometric β-functions, for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric β-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I then extend the analysis to find the generator of the renormalization group flow to conformally coupled scalar-field theories on the same manifolds. The geometric β-function in this case is not defined.

  14. Bose and Fermi statistics and the regularization of the nonrelativistic Jacobian for the scale anomaly

    NASA Astrophysics Data System (ADS)

    Lin, Chris L.; Ordóñez, Carlos R.

    2016-10-01

    We regulate in Euclidean space the Jacobian under scale transformations for two-dimensional nonrelativistic fermions and bosons interacting via contact interactions and compare the resulting scaling anomalies. For fermions, Grassmannian integration inverts the Jacobian; however, this effect is canceled by the regularization procedure and a result similar to that of bosons is attained. We show the independence of the result with respect to the regulating function, and show the robustness of our methods by comparing the procedure with an effective potential method using both cutoff and ζ -function regularization.

  15. Functional calculus and *-regularity of a class of Banach algebras II

    NASA Astrophysics Data System (ADS)

    Leung, Chi-Wai; Ng, Chi-Keung

    2006-10-01

    In this article, we define a natural Banach *-algebra for a C*-dynamical system (A, G, α) which is slightly bigger than L¹(G; A) (they are the same if A is finite-dimensional). We will show that this algebra is *-regular if G has polynomial growth. The main result in this article extends the two main results in [C.W. Leung, C.K. Ng, Functional calculus and *-regularity of a class of Banach algebras, Proc. Amer. Math. Soc., in press].

  16. A Quantitative Measure of Memory Reference Regularity

    SciTech Connect

    Mohan, T; de Supinski, B R; McKee, S A; Mueller, F; Yoo, A

    2001-10-01

    The memory performance of applications on existing architectures depends significantly on hardware features like prefetching and caching that exploit the locality of the memory accesses. The principle of locality has guided the design of many key micro-architectural features, including cache hierarchies, TLBs, and branch predictors. Quantitative measures of spatial and temporal locality have been useful for predicting the performance of memory hierarchy components. Unfortunately, the concept of locality is constrained to capturing memory access patterns characterized by proximity, while sophisticated memory systems are capable of exploiting other predictable access patterns. Here, we define the concepts of spatial and temporal regularity, and introduce a measure of spatial access regularity to quantify some of this predictability in access patterns. We present an efficient online algorithm to dynamically determine the spatial access regularity in an application's memory references, and demonstrate its use on a set of regular and irregular codes. We find that the use of our algorithm, with its associated overhead of trace generation, slows typical applications by a factor of 50-200, which is at least an order of magnitude better than traditional full trace generation approaches. Our approach can be applied to the characterization of program access patterns and in the implementation of sophisticated, software-assisted prefetching mechanisms, and its inherently parallel nature makes it well suited for use with multi-threaded programs.
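
    As a toy version of such a metric, one can score an address trace by the share of its strides that match the dominant stride; the paper's online algorithm is more refined, so the definition below is only a stand-in.

```python
from collections import Counter

def spatial_access_regularity(addresses):
    """Fraction of dynamic strides equal to the most common stride."""
    strides = [b - a for a, b in zip(addresses, addresses[1:])]
    if not strides:
        return 0.0
    (_, count), = Counter(strides).most_common(1)
    return count / len(strides)

print(spatial_access_regularity(range(0, 4096, 64)))          # 1.0: fully regular
print(spatial_access_regularity([0, 8, 640, 16, 8192, 24]))   # 0.2: irregular
```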

  17. Dyslexia in Regular Orthographies: Manifestation and Causation

    ERIC Educational Resources Information Center

    Wimmer, Heinz; Schurz, Matthias

    2010-01-01

    This article summarizes our research on the manifestation of dyslexia in German and on cognitive deficits, which may account for the severe reading speed deficit and the poor orthographic spelling performance that characterize dyslexia in regular orthographies. An only limited causal role of phonological deficits (phonological awareness,…

  18. Strategies of Teachers in the Regular Classroom

    ERIC Educational Resources Information Center

    De Leeuw, Renske Ria; De Boer, Anke Aaltje

    2016-01-01

    It is known that regular schoolteachers have difficulties in educating students with social, emotional and behavioral difficulties (SEBD), mainly because of their disruptive behavior. In order to manage the disruptive behavior of students with SEBD, much advice and many strategies are provided in the educational literature. However, very little is known…

  19. TAUBERIAN THEOREMS FOR MATRIX REGULAR VARIATION.

    PubMed

    Meerschaert, M M; Scheffler, H-P

    2013-04-01

    Karamata's Tauberian theorem relates the asymptotics of a nondecreasing right-continuous function to that of its Laplace-Stieltjes transform, using regular variation. This paper establishes the analogous Tauberian theorem for matrix-valued functions. Some applications to time series analysis are indicated.

  1. Regular Classroom Teachers' Perceptions of Mainstreaming Effects.

    ERIC Educational Resources Information Center

    Ringlaben, Ravic P.; Price, Jay R.

    To assess regular classroom teachers' perceptions of mainstreaming, a 22-item questionnaire was completed by 117 teachers (K through 12). Among the results were that nearly half of the Ss indicated a lack of preparation for implementing mainstreaming; 47% tended to be very willing to accept mainstreamed students; 42% said mainstreaming was working…

  2. Regularities in Spearman's Law of Diminishing Returns.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    2003-01-01

    Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…

  3. Learning regular expressions for clinical text classification

    PubMed Central

    Bui, Duy Duc An; Zeng-Treitler, Qing

    2014-01-01

    Objectives Natural language processing (NLP) applications typically use regular expressions that have been developed manually by human experts. Our goal is to automate both the creation and utilization of regular expressions in text classification. Methods We designed a novel regular expression discovery (RED) algorithm and implemented two text classifiers based on RED. The RED+ALIGN classifier combines RED with an alignment algorithm, and RED+SVM combines RED with a support vector machine (SVM) classifier. Two clinical datasets were used for testing and evaluation: the SMOKE dataset, containing 1091 text snippets describing smoking status; and the PAIN dataset, containing 702 snippets describing pain status. We performed 10-fold cross-validation to calculate accuracy, precision, recall, and F-measure metrics. In the evaluation, an SVM classifier was trained as the control. Results The two RED classifiers achieved 80.9–83.0% in overall accuracy on the two datasets, which is 1.3–3% higher than SVM's accuracy (p<0.001). Similarly, small but consistent improvements have been observed in precision, recall, and F-measure when RED classifiers are compared with SVM alone. More significantly, RED+ALIGN correctly classified many instances that were misclassified by the SVM classifier (8.1–10.3% of the total instances and 43.8–53.0% of SVM's misclassifications). Conclusions Machine-generated regular expressions can be effectively used in clinical text classification. The regular expression-based classifier can be combined with other classifiers, like SVM, to improve classification performance. PMID:24578357
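
    The flavor of the utilization step can be seen with hand-written patterns standing in for the machine-discovered ones (RED learns expressions of this kind from labeled snippets; the patterns below are invented for illustration):

```python
import re

SMOKER = re.compile(r"\b(current|active)\s+smoker\b|\bsmokes\b", re.I)
NONSMOKER = re.compile(r"\b(denies|quit|never)\s+(smoking|smoked)\b|\bnon-?smoker\b", re.I)

def classify_smoking(snippet: str) -> str:
    """Classify a clinical snippet by regular-expression matching."""
    if NONSMOKER.search(snippet):
        return "non-smoker"
    if SMOKER.search(snippet):
        return "smoker"
    return "unknown"

print(classify_smoking("Patient is a current smoker, 1 pack/day."))   # smoker
print(classify_smoking("Quit smoking in 2005; denies smoking now."))  # non-smoker
```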

  4. A comprehensive methodology for algorithm characterization, regularization and mapping into optimal VLSI arrays

    SciTech Connect

    Barada, H.R.

    1989-01-01

    This dissertation provides a fairly comprehensive treatment of a broad class of algorithms as it pertains to systolic implementation. The author describes some formal algorithmic transformations that can be utilized to map regular and some irregular compute-bound algorithms into best-fit, time-optimal systolic architectures. The resulting architectures can be one-dimensional, two-dimensional, three-dimensional or nonplanar. The methodology detailed in the dissertation employs, like other methods, the concept of the dependence vector to order, in space and time, the index points representing the algorithm. However, by differentiating between two types of dependence vectors, the ordering procedure is allowed to be flexible and time optimal. Furthermore, unlike other methodologies, the approach reported here does not put constraints on the topology or dimensionality of the target architecture. The ordered index points are represented by nodes in a diagram called the Systolic Precedence Diagram (SPD). The SPD is a form of precedence graph that takes into account the systolic operation requirements of strictly local communications and regular data flow. Therefore, any algorithm with variable dependence vectors has to be transformed into a regular indexed set of computations with local dependencies. This can be done by replacing variable dependence vectors with sets of fixed dependence vectors. The SPD is transformed into an acyclic, labeled, directed graph called the Systolic Directed Graph (SDG). The SDG models the data flow as well as the timing for the execution of the given algorithm on a time-optimal array.

  5. Group Sparsity Regularization for Calibration of SubsurfaceFlow Models under Geologic Uncertainty

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.

    2014-12-01

    Subsurface flow model calibration inverse problems typically involve inference of high-dimensional aquifer properties from limited monitoring and performance data. To find plausible solutions, the dynamic flow and pressure data are augmented with prior geological information about the unknown properties. Specifically, geologic continuity that exhibits itself as strong spatial correlation in heterogeneous rock properties has motivated various regularization and parameterization techniques for solving ill-posed model calibration inverse problems. However, complex geologic formations, such as fluvial facies distribution, are not amenable to generic regularization techniques; hence, more specific prior models about the shape and connectivity of the underlying geologic patterns are necessary for constraining the solution properly. Inspired by recent advances in signal processing, sparsity regularization uses effective basis functions to compactly represent complex geologic patterns for efficient model calibration. Here, we present a novel group-sparsity regularization that can discriminate between alternative plausible prior models based on the dynamic response data. This regularization property is used to select prior models that better reconstruct the complex geo-spatial connectivity during calibration. With group sparsity, the dominant spatial connectivity patterns are encoded into several parameter groups where each group is tuned to represent certain types of geologic patterns. In the model calibration process, dynamic flow and pressure data are used to select a small subset of groups to estimate aquifer properties. We demonstrate the effectiveness of the group sparsity regularization for solving ill-posed model calibration inverse problems.

  6. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  7. Sparse High Dimensional Models in Economics.

    PubMed

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2011-09-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  9. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  10. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  11. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  12. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...

  13. Regularity of nuclear structure under random interactions

    SciTech Connect

    Zhao, Y. M.

    2011-05-06

    In this contribution I present a brief introduction to simplicity out of complexity in nuclear structure, specifically, the regularity of nuclear structure under random interactions. I exemplify such simplicity by two examples: spin-zero ground state dominance and positive parity ground state dominance in even-even nuclei. Then I discuss two recent results of nuclear structure in the presence of random interactions, in collaboration with Prof. Arima. Firstly I discuss sd bosons under random interactions, with the focus on excited states in the yrast band. We find a few regular patterns in these excited levels. Secondly I discuss our recent efforts towards obtaining eigenvalues without diagonalizing the full matrices of the nuclear shell model Hamiltonian.

  14. Charged fermions tunneling from regular black holes

    SciTech Connect

    Sharif, M.; Javed, W.

    2012-11-15

    We study Hawking radiation of charged fermions as a tunneling process from charged regular black holes, i.e., the Bardeen and ABGB black holes. For this purpose, we apply the semiclassical WKB approximation to the general covariant Dirac equation for charged particles and evaluate the tunneling probabilities. We recover the Hawking temperature corresponding to these charged regular black holes. Further, we consider the back-reaction effects of the emitted spin particles from black holes and calculate their corresponding quantum corrections to the radiation spectrum. We find that this radiation spectrum is not purely thermal due to the energy and charge conservation but has some corrections. In the absence of charge, e = 0, our results are consistent with those already present in the literature.

  15. Tracking magnetogram proper motions by multiscale regularization

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.

    1995-01-01

    Long uninterrupted sequences of solar magnetograms from the Global Oscillations Network Group (GONG) network and from the Solar and Heliospheric Observatory (SOHO) satellite will provide the opportunity to study the proper motions of magnetic features. The possible use of multiscale regularization, a scale-recursive estimation technique which begins with a prior model of how state variables and their statistical properties propagate over scale, is examined. Short magnetogram sequences are analyzed with the multiscale regularization algorithm as applied to optical flow. This algorithm is found to be efficient, provides results for all the spatial scales spanned by the data, and provides error estimates for the solutions. It is found that the algorithm is less sensitive to evolutionary changes than correlation tracking.

  16. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in the user-input sanitation of software systems often lead to vulnerabilities. Many of them are caused by improper use of regular replacement. This paper presents a precise modeling of the various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text-processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FSTs. A compact representation of FSTs is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.
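
    Two of the replacement semantics named above, greedy and reluctant, are easy to demonstrate in an ordinary regex engine; these are exactly the behaviors the FST model has to reproduce:

```python
import re

s = "<a><b>"
print(re.findall(r"<.*>", s))    # ['<a><b>'] (greedy: longest possible match)
print(re.findall(r"<.*?>", s))   # ['<a>', '<b>'] (reluctant: shortest match)

print(re.sub(r"<.*>", "X", s))   # 'X'  (one replacement spanning the string)
print(re.sub(r"<.*?>", "X", s))  # 'XX' (two replacements)
```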

  17. A regular version of Smilansky model

    SciTech Connect

    Barseghyan, Diana; Exner, Pavel

    2014-04-15

    We discuss a modification of the Smilansky model in which a singular potential “channel” is replaced by a regular potential, unbounded from below, which shrinks as it becomes deeper. We demonstrate that, similarly to the original model, such a system exhibits a spectral transition with respect to the coupling constant, and we determine the critical value above which a new spectral branch opens. The result is generalized to situations with multiple potential “channels”.

  18. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. The phantom's inclusions were filled with a fluorochrome (Cy5.5), and optical data were acquired at 60 projections over 360 degrees for each configuration. Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother than those of classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
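
    Stripped of the entropic prior, the core iteration is the classical RL/MLEM multiplicative update. In the sketch below the floating default prior is imitated by Gaussian-smoothing the running estimate, a crude stand-in for the regularization actually used; A, m and all parameter values are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def regularized_mlem(A, m, n_iter=100, sigma=1.0):
    """RL/MLEM updates for m ~ A x, started from a constant image as in the
    abstract; the smoothing step loosely mimics the floating default prior."""
    x = np.full(A.shape[1], m.sum() / A.shape[1])   # constant initial condition
    sens = np.maximum(A.sum(axis=0), 1e-12)         # A^T 1, the sensitivity image
    for _ in range(n_iter):
        ratio = m / np.maximum(A @ x, 1e-12)        # measured / predicted
        x = x * (A.T @ ratio) / sens                # multiplicative RL update
        x = gaussian_filter1d(x, sigma)             # fixed-width Gaussian kernel
    return x
```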

  19. A regularization approach to hydrofacies delineation

    SciTech Connect

    Wohlberg, Brendt; Tartakovsky, Daniel

    2009-01-01

    We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.

  20. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including the identification of impact and harmonic forces are conducted on a cantilever thin-plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, comprising Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  1. Charge-regularization effects on polyelectrolytes

    NASA Astrophysics Data System (ADS)

    Muthukumar, Murugappan

    2012-02-01

    When electrically charged macromolecules are dispersed in polar solvents, their effective net charge is generally different from their chemical charge, due to competition between counterion adsorption and the translational entropy of the dissociated counterions. The effective charge changes significantly as the experimental conditions change, for example with variations in solvent quality, temperature, and the concentration of added small electrolytes. This charge-regularization effect leads to major difficulties in interpreting experimental data on polyelectrolyte solutions and challenges in understanding the various polyelectrolyte phenomena. Even the most fundamental issue of experimentally determining the molar mass of charged macromolecules by the light scattering method has so far been difficult due to this feature. We will present a theory of charge regularization of flexible polyelectrolytes in solutions and discuss the consequences of charge regularization for (a) experimental determination of the molar mass of polyelectrolytes using scattering techniques, (b) the coil-globule transition, (c) macrophase separation in polyelectrolyte solutions, (d) phase behavior in coacervate formation, and (e) volume phase transitions in polyelectrolyte gels.

  2. Automatic detection of regularly repeating vocalizations

    NASA Astrophysics Data System (ADS)

    Mellinger, David

    2005-09-01

    Many animal species produce repetitive sounds at regular intervals. This regularity can be used for automatic recognition of the sounds, providing improved detection at a given signal-to-noise ratio. Here, the detection of sperm whale sounds is examined. Sperm whales produce highly repetitive ``regular clicks'' at periods of about 0.2-2 s, and faster click trains in certain behavioral contexts. The following detection procedure was tested: a spectrogram was computed; values within a certain frequency band were summed; time windowing was applied; each windowed segment was autocorrelated; and the maximum of the autocorrelation within a certain periodicity range was chosen. This procedure was tested on sets of recordings containing sperm whale sounds and interfering sounds, both low-frequency recordings from autonomous hydrophones and high-frequency ones from towed hydrophone arrays. An optimization procedure iteratively varies the detection parameters (spectrogram frame length and frequency range, window length, periodicity range, etc.). The performance of various sets of parameters was measured by setting a standard level of allowable missed calls, and the resulting optimum parameters are described. Performance is also compared to that of a neural network trained on the data sets. The method is also demonstrated for sounds of blue whales, minke whales, and seismic airguns. [Funding from ONR.]
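
    A minimal version of the procedure, reduced to one band-energy time series and one window, can be written directly from the description above; the sampling rate, period range and test signal are illustrative.

```python
import numpy as np

def detect_periodicity(x, fs, period_range=(0.2, 2.0)):
    """Autocorrelate a band-energy signal x (sampled at fs Hz) and return the
    peak autocorrelation value and period within the allowed periodicity range."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0, 1, 2, ...
    ac /= ac[0]                                         # normalize by lag-0 energy
    lo, hi = (int(p * fs) for p in period_range)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return ac[lag], lag / fs

fs = 100.0
t = np.arange(0, 10, 1 / fs)
clicks = (np.sin(2 * np.pi * t / 0.7) > 0.99).astype(float)  # clicks every ~0.7 s
score, period = detect_periodicity(clicks, fs)
print(round(period, 2))   # ~0.7
```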

  3. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  4. Regularized robust coding for face recognition.

    PubMed

    Yang, Meng; Zhang, Lei; Yang, Jian; Zhang, David

    2013-05-01

    Recently the sparse representation based classification (SRC) has been proposed for robust face recognition (FR). In SRC, the testing image is coded as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be effective enough to describe the coding residual in practical FR systems. Meanwhile, the sparsity constraint on the coding coefficients makes the computational cost of SRC very high. In this paper, we propose a new face coding model, namely regularized robust coding (RRC), which can robustly regress a given signal with regularized regression coefficients. By assuming that the coding residual and the coding coefficients are respectively independent and identically distributed, the RRC seeks a maximum a posteriori solution of the coding problem. An iteratively reweighted regularized robust coding (IR(3)C) algorithm is proposed to solve the RRC model efficiently. Extensive experiments on representative face databases demonstrate that the RRC is much more effective and efficient than state-of-the-art sparse representation based methods in dealing with face occlusion, corruption, lighting and expression changes, etc.

  5. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

    We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α+2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  6. Polynomial regularization for robust MRI-based estimation of blood flow velocities and pressure gradients.

    PubMed

    Delles, Michael; Rengier, Fabian; Ley, Sebastian; von Tengg-Kobligk, Hendrik; Kauczor, Hans-Ulrich; Dillmann, Rüdiger; Unterhinninghofen, Roland

    2011-01-01

    In cardiovascular diagnostics, phase-contrast MRI is a valuable technique for measuring blood flow velocities and computing blood pressure values. Unfortunately, both velocity and pressure data typically suffer from the strong image noise of velocity-encoded MRI. In the past, separate approaches of regularization with physical a priori knowledge and data representation with continuous functions have been proposed to overcome these drawbacks. In this article, we investigate polynomial regularization as an exemplary specification of combining these two techniques. We perform time-resolved three-dimensional velocity measurements and pressure gradient computations on MRI acquisitions of steady flow in a physical phantom. Results based on the higher-quality temporal mean data are used as a reference. We thereby investigate the performance of our polynomial regularization approach, which reduces the root-mean-squared errors relative to the reference data by 45% for velocities and 60% for pressure gradients.
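
    A hedged one-dimensional illustration of the combination: a low-order polynomial fit to a noisy velocity profile acts simultaneously as regularization (smoothing) and as a continuous representation, so derivatives (and hence pressure gradients) can be evaluated analytically and stably. The profile shape, noise level, and polynomial degree are toy choices, not the paper's setup.

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 50)
    v_true = 4.0 * x * (1.0 - x)                          # parabolic steady-flow profile
    v_meas = v_true + np.random.normal(0.0, 0.1, x.size)  # strong MRI-like noise
    coeffs = np.polyfit(x, v_meas, deg=2)                 # least-squares polynomial fit
    v_fit = np.polyval(coeffs, x)
    dvdx = np.polyval(np.polyder(coeffs), x)              # analytic derivative of the fit
    print(np.sqrt(np.mean((v_fit - v_true) ** 2)))        # RMSE vs. noise-free reference
    ```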

  7. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

    Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, and the estimates are uniform in alpha.

  8. Regular physical exercise: way to healthy life.

    PubMed

    Siddiqui, N I; Nessa, A; Hossain, M A

    2010-01-01

    Any bodily activity or movement that enhances and maintains overall health and physical fitness is called physical exercise. The habit of regular physical exercise has numerous benefits. Exercise is of various types, such as aerobic exercise, anaerobic exercise, and flexibility exercise. Aerobic exercise moves the large muscle groups with alternate contraction and relaxation, forces deep breathing, and makes the heart pump more blood with adequate tissue oxygenation. It is also called cardiovascular exercise. Examples of aerobic exercise are walking, running, jogging, and swimming. In anaerobic exercise, there is forceful contraction of muscle with stretching, usually mechanically aided, which helps build up muscle strength and muscle bulk. Examples are weight lifting, pulling, pushing, and sprinting. Flexibility exercise is a type of stretching exercise to improve the movements of muscles, joints, and ligaments. Walking is a good example of aerobic exercise: easy to perform, safe, effective, requiring no training or equipment, and with less chance of injury. A regular 30-minute brisk walk in the morning, totaling 150 minutes per week, is good exercise. Regular exercise improves cardiovascular status and reduces the risk of cardiac disease, high blood pressure, and cerebrovascular disease. It reduces body weight, improves insulin sensitivity, helps in glycemic control, and prevents obesity and diabetes mellitus. It is helpful in relieving anxiety and stress, and brings a sense of well-being and overall physical fitness. The global trend toward mechanization and labor saving is leading to an epidemic of long-term chronic diseases such as diabetes mellitus and cardiovascular disease. All efforts should be made to create public awareness promoting physical activity and physically demanding recreational pursuits, and to provide adequate facilities. PMID:20046192

  9. Total-variation regularization with bound constraints

    SciTech Connect

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
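
    A minimal sketch of the splitting idea in one dimension, under stated assumptions: alternate between (a) a TV denoising step on an unconstrained variable and (b) projection onto the bound constraints, so an off-the-shelf TV solver needs no modification. The TV step here is plain gradient descent on a smoothed TV functional; the paper's solver and reweighting schemes are not reproduced.

    ```python
    import numpy as np

    def tv_denoise_1d(y, lam=1.0, n_iter=200, step=0.1, eps=1e-6):
        x = y.copy()
        for _ in range(n_iter):
            d = np.diff(x)
            t = d / np.sqrt(d * d + eps)        # gradient of smoothed |d|
            g = np.zeros_like(x)
            g[:-1] -= t
            g[1:] += t
            x -= step * ((x - y) + lam * g)     # fidelity term + TV term
        return x

    def tv_bound_constrained(y, lo, hi, lam=1.0, n_outer=10):
        x = y.copy()
        for _ in range(n_outer):
            x = tv_denoise_1d(x, lam=lam)       # (a) unconstrained TV step
            x = np.clip(x, lo, hi)              # (b) enforce bounds by projection
        return x
    ```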

  10. The regular state in higher order gravity

    NASA Astrophysics Data System (ADS)

    Cotsakis, Spiros; Kadry, Seifedine; Trachilis, Dimitrios

    2016-08-01

    We consider the higher-order gravity theory derived from the quadratic Lagrangian R + εR² in vacuum as a first-order (ADM-type) system with constraints, and build time developments of solutions of an initial value formulation of the theory. We show that all such solutions, if analytic, contain the right number of free functions to qualify as general solutions of the theory. We further show that any regular analytic solution which satisfies the constraints and the evolution equations can be given in the form of an asymptotic formal power series expansion.

  11. New Regularization Method for EXAFS Analysis

    NASA Astrophysics Data System (ADS)

    Reich, Tatiana Ye.; Korshunov, Maxim E.; Antonova, Tatiana V.; Ageev, Alexander L.; Moll, Henry; Reich, Tobias

    2007-02-01

    As an alternative to the analysis of EXAFS spectra by conventional shell fitting, the Tikhonov regularization method has been proposed. An improved algorithm that utilizes a priori information about the sample has been developed and applied to the analysis of U L3-edge spectra of soddyite, (UO2)2SiO4·2H2O, and of U(VI) sorbed onto kaolinite. The partial radial distribution functions g1(UU), g2(USi), and g3(UO) of soddyite agree with crystallographic values and previous EXAFS results.

  12. New Regularization Method for EXAFS Analysis

    SciTech Connect

    Reich, Tatiana Ye.; Reich, Tobias; Korshunov, Maxim E.; Antonova, Tatiana V.; Ageev, Alexander L.; Moll, Henry

    2007-02-02

    As an alternative to the analysis of EXAFS spectra by conventional shell fitting, the Tikhonov regularization method has been proposed. An improved algorithm that utilizes a priori information about the sample has been developed and applied to the analysis of U L3-edge spectra of soddyite, (UO2)2SiO4·2H2O, and of U(VI) sorbed onto kaolinite. The partial radial distribution functions g1(UU), g2(USi), and g3(UO) of soddyite agree with crystallographic values and previous EXAFS results.

  13. Regular systems of inbreeding with mutation.

    PubMed

    Campbell, R B

    1988-08-01

    Probability of identity by type is studied for regular systems of inbreeding in the presence of mutation. Analytic results are presented for half-sib mating, first cousin mating, and half nth cousin mating under both infinite allele and two allele (back mutation) models. Reasonable rates of mutation do not provide significantly different results from probability of identity by descent in the absence of mutation. Homozygosity is higher under half-sib mating than under first cousin mating, but the expected number of copies of a gene in the population is higher under first cousin mating than under half-sib mating.

  14. Multichannel image regularization using anisotropic geodesic filtering

    SciTech Connect

    Grazzini, Jacopo A

    2010-01-01

    This paper extends a recent image-dependent regularization approach aimed at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. We then design a generalized anisotropic geodesic filter, incorporating not only a measure of the edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.

  15. Explicit solutions of one-dimensional total variation problem

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2015-09-01

    This work deals with denoising of a one-dimensional signal corrupted by additive white Gaussian noise. A common way to solve the problem is to utilize the total variation (TV) method. Basically, the TV regularization minimizes a functional consisting of the sum of fidelity and regularization terms. We derive explicit solutions of the one-dimensional TV regularization problem that help us to restore noisy signals with a direct, non-iterative algorithm. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of noisy signals.

  16. An efficient, advanced regularized inversion method for highly parameterized environmental models

    NASA Astrophysics Data System (ADS)

    Skahill, B. E.; Baggett, J. S.

    2008-12-01

    The Levenberg-Marquardt method of computer-based parameter estimation can be readily modified in cases of high parameter insensitivity and correlation by the inclusion of various regularization devices to maintain numerical stability and robustness, including, for example, Tikhonov regularization and truncated singular value decomposition. With Tikhonov regularization, where parameters or combinations of parameters cannot be uniquely estimated, they are provided with values or assigned relationships with other parameters that are decreed to be realistic by the modeler. Tikhonov schemes provide a mechanism for assimilation of valuable "outside knowledge" into the inversion process, with the result that parameter estimates, thus informed by a modeler's expertise, are more suitable for use in the making of important predictions by that model than would otherwise be the case. However, by maintaining the high dimensionality of the adjustable parameter space, they can be computationally burdensome. Moreover, while Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. We will present results associated with development efforts that include an accelerated Levenberg-Marquardt local search algorithm adapted for Tikhonov regularization, and a technique which allows relative regularization weights to be estimated as parameters through the calibration process itself (Doherty and Skahill, 2006). This new method, encapsulated in the MICUT software (Skahill et al., 2008), will be compared, in terms of efficiency and enforcement of regularization relationships, with the SVD-Assist method (Tonkin and Doherty, 2005) contained in the popular PEST package by considering various watershed
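
    For concreteness, one Tikhonov-regularized Levenberg-Marquardt update can be sketched as below; this is a generic textbook form under stated assumptions, not the accelerated algorithm or the MICUT implementation referred to above. J and r are the Jacobian and residual vector at the current parameters p, beta is the regularization weight pulling parameters toward preferred values p_ref, and mu is the LM damping; all values are illustrative.

    ```python
    import numpy as np

    def lm_tikhonov_step(J, r, p, p_ref, beta=1.0, mu=1e-2):
        # minimize ||r - J dp||^2 + beta ||p + dp - p_ref||^2 + mu ||dp||^2
        n = p.size
        A = J.T @ J + (beta + mu) * np.eye(n)
        b = J.T @ r - beta * (p - p_ref)
        return p + np.linalg.solve(A, b)
    ```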

  17. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has nevertheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
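
    The statistical model being protected is standard; a minimal (non-private) sketch of l2-regularized logistic regression fit by gradient descent is given below for orientation. The paper's contribution, the cryptographic and distributed-computing machinery, is not shown, and the learning rate and penalty are illustrative.

    ```python
    import numpy as np

    def fit_logreg_l2(X, y, lam=0.1, lr=0.1, n_iter=1000):
        # X: (samples x features); y: 0/1 labels
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
            grad = X.T @ (p - y) / len(y) + lam * w  # logistic loss + l2 penalty
            w -= lr * grad
        return w
    ```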

  20. The Regularity of Optimal Irrigation Patterns

    NASA Astrophysics Data System (ADS)

    Morel, Jean-Michel; Santambrogio, Filippo

    2010-02-01

    A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, the river basins or the trees. Recent approaches of these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s α with 0 < α < 1. The aim of this paper is to prove the regularity of paths (rivers, branches,...) when the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.

  1. Accelerating Large Data Analysis By Exploiting Regularities

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model in which we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object, where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
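
    One of the regularities mentioned, a zone that is a rigid-body transformation of another, can be tested with the classic Kabsch procedure; the sketch below is a generic illustration under that assumption, not the paper's discovery algorithm.

    ```python
    import numpy as np

    def rigid_fit(A, B):
        # best-fit rotation R and translation t with B ~ A @ R.T + t (rows are points)
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # exclude reflections
            Vt[-1] *= -1.0
            R = Vt.T @ U.T
        t = cb - R @ ca
        residual = np.linalg.norm(A @ R.T + t - B)
        return R, t, residual                    # near-zero residual => rigid match
    ```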

  2. Regularized Semiparametric Estimation for Ordinary Differential Equations

    PubMed Central

    Li, Yun; Zhu, Ji; Wang, Naisyin

    2015-01-01

    Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics, and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain a better understanding of the system. One key interest is thus to estimate these parameters well. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive: even after incorporating error terms, there can still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constant within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamics in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639

  3. Hawking fluxes and anomalies in rotating regular black holes with a time-delay

    NASA Astrophysics Data System (ADS)

    Takeuchi, Shingo

    2016-11-01

    Based on the anomaly cancellation method we compute the Hawking fluxes (the Hawking thermal flux and the total flux of the energy-momentum tensor) from a four-dimensional rotating regular black hole with a time-delay. To this purpose, in the three metrics proposed in [1], we attempt the dimensional reduction in which the anomaly cancellation method is feasible at the near-horizon region in a general scalar field theory. We demonstrate that the dimensional reduction is possible in two of those metrics; hence we perform the anomaly cancellation method and compute the Hawking fluxes in those two metrics. Our Hawking fluxes involve three effects: (1) the quantum gravity effect regularizing the core of the black hole, (2) the rotation of the black hole, and (3) the time-delay. For the remaining metric, in which the dimensional reduction could not be performed, we argue that the metric itself is problematic and discuss the cause. The Hawking fluxes computed in this study can be considered closer to realistic Hawking fluxes. Moreover, the question of which Hawking fluxes can be obtained from the anomaly cancellation method is interesting in terms of the relation between the consistency of quantum field theories and black hole thermodynamics.

  4. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    ERIC Educational Resources Information Center

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  5. Delayed Acquisition of Non-Adjacent Vocalic Distributional Regularities

    ERIC Educational Resources Information Center

    Gonzalez-Gomez, Nayeli; Nazzi, Thierry

    2016-01-01

    The ability to compute non-adjacent regularities is key in the acquisition of a new language. In the domain of phonology/phonotactics, sensitivity to non-adjacent regularities between consonants has been found to appear between 7 and 10 months. The present study focuses on the emergence of a posterior-anterior (PA) bias, a regularity involving two…

  6. The Essential Special Education Guide for the Regular Education Teacher

    ERIC Educational Resources Information Center

    Burns, Edward

    2007-01-01

    The Individuals with Disabilities Education Act (IDEA) of 2004 has placed a renewed emphasis on the importance of the regular classroom, the regular classroom teacher and the general curriculum as the primary focus of special education. This book contains over 100 topics that deal with real issues and concerns regarding the regular classroom and…

  7. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  8. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  9. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  10. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  11. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced... withholding (spouse annuity only), recovery of debts due the Federal government, and garnishment pursuant...

  12. Regularity of free boundaries: a heuristic retro

    PubMed Central

    Caffarelli, Luis A.; Shahgholian, Henrik

    2015-01-01

    This survey concerns the regularity theory of a few free boundary problems developed over the past half-century. Our intention is to bring up different ideas and techniques that constitute the fundamentals of the theory. We shall discuss four different problems, where the approaches are somewhat different in each case. Nevertheless, these problems can be divided into two groups: (i) the obstacle and thin obstacle problems; (ii) minimal surfaces, and cavitation flow of a perfect fluid. In each case, we shall only discuss the methodology and approaches, giving basic ideas and tools that have been specifically designed and tailored for that particular problem. The survey is kept at a heuristic level with mainly geometric interpretation of the techniques and situations at hand. PMID:26261372

  13. Local orientational mobility in regular hyperbranched polymers

    NASA Astrophysics Data System (ADS)

    Dolgushev, Maxim; Markelov, Denis A.; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.

  14. Generalized equations of state and regular universes

    NASA Astrophysics Data System (ADS)

    Contreras, F.; Cruz, N.; González, E.

    2016-05-01

    We found nonsingular solutions for universes filled with a fluid which obeys a generalized equation of state of the form P(ρ) = −Aρ + γρ^λ. An emergent universe is obtained if A = 1 and λ = 1/2. If the matter source is reinterpreted as that of a scalar matter field with some potential, the corresponding potential is derived. For a closed universe, an exact bounce solution is found for A = 1/3 and the same λ. We also explore how the composition of these universes can be interpreted in terms of known fluids. It is of interest to note that accelerated solutions previously found for the late-time evolution also represent regular solutions at early times.

  15. Local orientational mobility in regular hyperbranched polymers.

    PubMed

    Dolgushev, Maxim; Markelov, Denis A; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes. PMID:27575171

  16. Regularization of Motion Equations with L-Transformation and Numerical Integration of the Regular Equations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, Sergei M.

    2003-04-01

    The sets of L-matrices of the second, fourth and eighth orders are constructed axiomatically. The defining relations are taken from the regularization of motion equations for Keplerian problem. In particular, the Levi-Civita matrix and KS-matrix are L-matrices of second and fourth order, respectively. A theorem on the ranks of L-transformations of different orders is proved. The notion of L-similarity transformation is introduced, certain sets of L-matrices are constructed, and their classification is given. An application of fourth order L-matrices for N-body problem regularization is given. A method of correction for regular coordinates in the Runge-Kutta-Fehlberg integration method for regular motion equations of a perturbed two-body problem is suggested. Comparison is given for the results of numerical integration in the problem of defining the orbit of a satellite, with and without the above correction method. The comparison is carried out with respect to the number of calls to the subroutine evaluating the perturbational accelerations vector. The results of integration using the correction turn out to be in a favorable position.

  17. Regularization of Instantaneous Frequency Attribute Computations

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computation of a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
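
    A hedged sketch of method (1): instantaneous frequency from the analytic signal, with the division by the instantaneous power stabilized by a small additive term standing in for the L-curve/GCV-tuned regularization discussed above. The eps value is illustrative.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def instantaneous_frequency(x, fs, eps=1e-3):
        z = hilbert(x)                               # analytic signal
        # f_inst = Im(conj(z) dz/dt) / (2 pi |z|^2); eps regularizes the division
        num = np.imag(np.conj(z) * np.gradient(z)) * fs
        den = np.abs(z) ** 2 + eps * np.mean(np.abs(z) ** 2)
        return num / den / (2.0 * np.pi)
    ```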

  18. Regularization for Atmospheric Temperature Retrieval Problems

    NASA Technical Reports Server (NTRS)

    Velez-Reyes, Miguel; Galarza-Galarza, Ruben

    1997-01-01

    Passive remote sensing of the atmosphere is used to determine the atmospheric state. A radiometer measures microwave emissions from earth's atmosphere and surface. The radiance measured by the radiometer is proportional to the brightness temperature. This brightness temperature can be used to estimate atmospheric parameters such as temperature and water vapor content. These quantities are of primary importance for different applications in meteorology, oceanography, and geophysical sciences. Depending on the range in the electromagnetic spectrum being measured by the radiometer and the atmospheric quantities to be estimated, the retrieval or inverse problem of determining atmospheric parameters from brightness temperature might be linear or nonlinear. In most applications, the retrieval problem requires the inversion of a Fredholm integral equation of the first kind making this an ill-posed problem. The numerical solution of the retrieval problem requires the transformation of the continuous problem into a discrete problem. The ill-posedness of the continuous problem translates into ill-conditioning or ill-posedness of the discrete problem. Regularization methods are used to convert the ill-posed problem into a well-posed one. In this paper, we present some results of our work in applying different regularization techniques to atmospheric temperature retrievals using brightness temperatures measured with the SSM/T-1 sensor. Simulation results are presented which show the potential of these techniques to improve temperature retrievals. In particular, no statistical assumptions are needed and the algorithms were capable of correctly estimating the temperature profile corner at the tropopause independent of the initial guess.

  19. Black hole mimickers: Regular versus singular behavior

    SciTech Connect

    Lemos, Jose P. S.; Zaslavskii, Oleg B.

    2008-07-15

    Black hole mimickers are possible alternatives to black holes; they would look observationally almost like black holes but would have no horizon. The properties in the near-horizon region where gravity is strong can be quite different for both types of objects, but at infinity it could be difficult to discern black holes from their mimickers. To disentangle this possible confusion, we examine the near-horizon properties, and their connection with far away asymptotic properties, of some candidates for black hole mimickers. We study spherically symmetric uncharged or charged but nonextremal objects, as well as spherically symmetric charged extremal objects. Within the uncharged or charged but nonextremal black hole mimickers, we study nonextremal ε-wormholes on the threshold of the formation of an event horizon, of which a subclass are called black foils, and gravastars. Within the charged extremal black hole mimickers we study extremal ε-wormholes on the threshold of the formation of an event horizon, quasi-black holes, and wormholes on the basis of quasi-black holes from Bonnor stars. We elucidate whether or not the objects belonging to these two classes remain regular in the near-horizon limit. The requirement of full regularity, i.e., finite curvature and absence of naked behavior, up to an arbitrary neighborhood of the gravitational radius of the object enables one to rule out potential mimickers in most of the cases. A ranking of black hole mimickers from best to worst, both nonextremal and extremal, is as follows: wormholes on the basis of extremal black holes or on the basis of quasi-black holes, quasi-black holes, wormholes on the basis of nonextremal black holes (black foils), and gravastars. Since in observational astrophysics it is difficult to find extremal configurations (the best mimickers in the ranking), whereas nonextremal configurations are really bad mimickers, the task of distinguishing black holes from their mimickers seems to

  20. On the capacity of multihop slotted ALOHA networks with regular structure

    NASA Astrophysics Data System (ADS)

    Silvester, J. A.; Kleinrock, L.

    1983-08-01

    The capacity of networks with a regular structure operating under the slotted ALOHA access protocol is investigated. Circular (loop) and linear (bus) networks are first examined, followed by consideration of two-dimensional networks. For one-dimensional networks, it is found that the capacity is basically independent of the network average degree and is almost constant with respect to network size. For two-dimensional networks, it is determined that the capacity grows in proportion to the square root of the number of nodes in the network, provided that the average degree is kept small. In addition, it is found that reducing the average degree (with certain connectivity restrictions) permits a higher throughput to be achieved. Some of the peculiarities of routing in these networks are also studied.

  1. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. PMID:27513219

  3. Error analysis for matrix elastic-net regularization algorithms.

    PubMed

    Li, Hong; Chen, Na; Li, Luoqing

    2012-05-01

    Elastic-net regularization is a successful approach in statistical modeling. It can avoid large variations which occur in estimating complex models. In this paper, elastic-net regularization is extended to a more general setting, the matrix recovery (matrix completion) setting. Based on a combination of the nuclear-norm minimization and the Frobenius-norm minimization, we consider the matrix elastic-net (MEN) regularization algorithm, which is an analog to the elastic-net regularization scheme from compressive sensing. Some properties of the estimator are characterized by the singular value shrinkage operator. We estimate the error bounds of the MEN regularization algorithm in the framework of statistical learning theory. We compute the learning rate by estimates of the Hilbert-Schmidt operators. In addition, an adaptive scheme for selecting the regularization parameter is presented. Numerical experiments demonstrate the superiority of the MEN regularization algorithm.
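
    The singular value shrinkage operator mentioned above admits a compact closed form; the sketch below gives the proximal step for a nuclear-norm-plus-Frobenius penalty under stated assumptions, with tau and rho as illustrative regularization weights.

    ```python
    import numpy as np

    def matrix_elastic_net_prox(Y, tau=1.0, rho=0.1):
        # argmin_X 0.5 ||X - Y||_F^2 + tau ||X||_* + 0.5 rho ||X||_F^2:
        # soft-threshold the singular values, then shrink them
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_new = np.maximum(s - tau, 0.0) / (1.0 + rho)
        return (U * s_new) @ Vt
    ```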

  4. Preparation of Regular Specimens for Atom Probes

    NASA Technical Reports Server (NTRS)

    Kuhlman, Kim; Wishard, James

    2003-01-01

    A method of preparation of specimens of non-electropolishable materials for analysis by atom probes is being developed as a superior alternative to a prior method. In comparison with the prior method, the present method involves less processing time. Also, whereas the prior method yields irregularly shaped and sized specimens, the present developmental method offers the potential to prepare specimens of regular shape and size. The prior method is called the method of sharp shards because it involves crushing the material of interest and selecting microscopic sharp shards of the material for use as specimens. Each selected shard is oriented with its sharp tip facing away from the tip of a stainless-steel pin and is glued to the tip of the pin by use of silver epoxy. Then the shard is milled by use of a focused ion beam (FIB) to make the shard very thin (relative to its length) and to make its tip sharp enough for atom-probe analysis. The method of sharp shards is extremely time-consuming because the selection of shards must be performed with the help of a microscope, the shards must be positioned on the pins by use of micromanipulators, and the irregularity of size and shape necessitates many hours of FIB milling to sharpen each shard. In the present method, a flat slab of the material of interest (e.g., a polished sample of rock or a coated semiconductor wafer) is mounted in the sample holder of a dicing saw of the type conventionally used to cut individual integrated circuits out of the wafers on which they are fabricated in batches. A saw blade appropriate to the material of interest is selected. The depth of cut and the distance between successive parallel cuts are made such that what is left after the cuts is a series of thin, parallel ridges on a solid base. Then the workpiece is rotated 90° and the pattern of cuts is repeated, leaving behind a square array of square posts on the solid base. The posts can be made regular, long, and thin, as required for samples

  5. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification. PMID:22997267

  7. Compression and regularization with the information bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, Dj; Schwab, David

    Compression fundamentally involves a decision about what is relevant and what is not. The information bottleneck (IB) by Tishby, Pereira, and Bialek formalized this notion as an information-theoretic optimization problem and proposed an optimal tradeoff between throwing away as many bits as possible, and selectively keeping those that are most important. The IB has also recently been proposed as a theory of sensory gating and predictive computation in the retina by Palmer et al. Here, we introduce an alternative formulation of the IB, the deterministic information bottleneck (DIB), that we argue better captures the notion of compression, including that done by the brain. As suggested by its name, the solution to the DIB problem is a deterministic encoder, as opposed to the stochastic encoder that is optimal under the IB. We then compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB vastly outperforms the IB in terms of the DIB cost function. Our derivation of the DIB also provides a family of models which interpolates between the DIB and IB by adding noise of a particular form. We discuss the role of this noise as a regularizer.
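
    For reference, the two objectives can be stated compactly (β is the tradeoff parameter; this statement follows the description above and the IB/DIB literature):

    ```latex
    \text{IB:}\quad \min_{p(t\mid x)}\; I(X;T) \;-\; \beta\, I(T;Y)
    \qquad
    \text{DIB:}\quad \min_{p(t\mid x)}\; H(T) \;-\; \beta\, I(T;Y)
    ```

    Replacing the compression cost I(X;T) with the entropy H(T) is what yields the deterministic encoder described above.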

  8. Color correction optimization with hue regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
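
    A minimal sketch of the baseline the optimization starts from: a 3x3 color correction matrix fit by (optionally weighted) least squares between device RGB and target color values. Up-weighting memory-color patches via w gestures at preferred-color handling; the paper's hue regularization term itself is not reproduced, and all names are illustrative.

    ```python
    import numpy as np

    def fit_ccm(rgb_device, rgb_target, w=None):
        # rgb_device, rgb_target: (patches x 3) arrays of corresponding colors
        if w is None:
            w = np.ones(len(rgb_device))
        X, Y = rgb_device, rgb_target
        XtW = X.T * w                                # per-patch weights (e.g., skin, sky)
        M = np.linalg.solve(XtW @ X, XtW @ Y)        # row-vector convention: out = in @ M
        return M.T                                   # column convention: out = M @ in
    ```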

  9. Reverberation mapping by regularized linear inversion

    NASA Technical Reports Server (NTRS)

    Krolik, Julian H.; Done, Christine

    1995-01-01

    Reverberation mapping of active galactic nucleus (AGN) emission-line regions requires the numerical deconvolution of two time series. We suggest the application of a new method, regularized linear inversion, to the solution of this problem. This method possesses many good features; it imposes no restrictions on the sign of the response function; it can provide clearly defined uncertainty estimates; it involves no guesswork about unmeasured data; it can give a clear indication of when the underlying convolution model is inadequate; and it is computationally very efficient. Using simulated data, we find the minimum S/N and length of the time series in order for this method to work satisfactorily. We also define guidelines for choosing the principal tunable parameter of the method and for interpreting the results. Finally, we reanalyze published data from the 1989 NGC 5548 campaign using this new method and compare the results to those previously obtained by maximum entropy analysis. For some lines we find good agreement, but for others, especially C III lambda(1909) and Si IV lambda(1400), we find significant differences. These can be attributed to the inability of the maximum entropy method to find negative values of the response function, but also illustrate the nonuniqueness of any deconvolution technique. We also find evidence that certain line light curves (e.g., C IV lambda(1549)) cannot be fully described by the simple linear convolution model.
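
    A hedged sketch of the core linear-inversion step: discretize the convolution of the continuum light curve with an unknown response function and solve the regularized normal equations with a second-difference smoothness penalty. The penalty weight lam plays the role of the principal tunable parameter mentioned above; grids and units are simplified, and negative response values are permitted, as the method allows.

    ```python
    import numpy as np

    def deconvolve_regularized(c, l, n_lag, lam=1.0):
        # l[t] = sum_k psi[k] * c[t - k]  ->  l = C @ psi
        N = len(l)
        C = np.zeros((N, n_lag))
        for k in range(n_lag):
            C[k:, k] = c[:N - k]
        D = np.diff(np.eye(n_lag), n=2, axis=0)   # second-difference (roughness) operator
        A = C.T @ C + lam * (D.T @ D)
        return np.linalg.solve(A, C.T @ l)        # response function estimate
    ```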

  10. Identifying Cognitive States Using Regularity Partitions

    PubMed Central

    2015-01-01

    Functional Magnetic Resonance Imaging (fMRI) data can be used to depict functional connectivity of the brain. Standard techniques have been developed to construct brain networks from these data; typically nodes are considered as voxels or sets of voxels, with weighted edges between them representing measures of correlation. Identifying cognitive states based on fMRI data is connected with recording voxel activity over a certain time interval. Using this information, network and machine learning techniques can be applied to discriminate the cognitive states of the subjects by exploring different features of the data. In this work we wish to describe and understand the organization of brain connectivity networks under cognitive tasks. In particular, we use a regularity partitioning algorithm that finds clusters of vertices such that they all behave with each other almost like random bipartite graphs. Based on the random approximation of the graph, we calculate a lower bound on the number of triangles as well as the expectation of the distribution of the edges in each subject and state. We investigate the results by comparing them to the state-of-the-art algorithms for exploring connectivity, and we argue that during epochs in which the subject is exposed to a stimulus, the inspected part of the brain is organized in an efficient way that enables enhanced functionality. PMID:26317983

  11. Regularized estimation of Euler pole parameters

    NASA Astrophysics Data System (ADS)

    Aktuğ, Bahadir; Yildirim, Ömer

    2013-07-01

    Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
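
    The forward model behind this estimation is linear: a site at Earth-centered position r on a rigid plate moves with velocity v = ω × r, so stacking the (negated) skew-symmetric matrices of the site positions gives a least-squares system for the Euler vector ω. The sketch below uses a plain ridge term as a stand-in for the tailored regularization proposed above; units and uncertainty weighting are omitted.

    ```python
    import numpy as np

    def skew(r):
        x, y, z = r
        return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

    def estimate_euler_vector(sites, velocities, lam=0.0):
        # v_i = omega x r_i = -skew(r_i) @ omega, stacked over all sites
        A = np.vstack([-skew(r) for r in sites])
        b = np.concatenate([np.asarray(v) for v in velocities])
        AtA = A.T @ A + lam * np.eye(3)          # lam > 0: simple ridge stand-in
        return np.linalg.solve(AtA, A.T @ b)
    ```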

  12. Determinants of Scanpath Regularity in Reading.

    PubMed

    von der Malsburg, Titus; Kliegl, Reinhold; Vasishth, Shravan

    2015-09-01

    Scanpaths have played an important role in classic research on reading behavior. Nevertheless, they have largely been neglected in later research perhaps due to a lack of suitable analytical tools. Recently, von der Malsburg and Vasishth (2011) proposed a new measure for quantifying differences between scanpaths and demonstrated that this measure can recover effects that were missed with the traditional eyetracking measures. However, the sentences used in that study were difficult to process and scanpath effects accordingly strong. The purpose of the present study was to test the validity, sensitivity, and scope of applicability of the scanpath measure, using simple sentences that are typically read from left to right. We derived predictions for the regularity of scanpaths from the literature on oculomotor control, sentence processing, and cognitive aging and tested these predictions using the scanpath measure and a large database of eye movements. All predictions were confirmed: Sentences with short words and syntactically more difficult sentences elicited more irregular scanpaths. Also, older readers produced more irregular scanpaths than younger readers. In addition, we found an effect that was not reported earlier: Syntax had a smaller influence on the eye movements of older readers than on those of young readers. We discuss this interaction of syntactic parsing cost with age in terms of shifts in processing strategies and a decline of executive control as readers age. Overall, our results demonstrate the validity and sensitivity of the scanpath measure and thus establish it as a productive and versatile tool for reading research.

  13. Temporal Regularity of the Environment Drives Time Perception

    PubMed Central

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stimulus was regular or not. We found that participants exposed to an irregular environment frequently reported perfectly regularly paced stimuli to be irregular. In a second experiment, we asked participants to judge whether the final stimulus was presented before or after a flash. In this way, we were able to determine distortions in temporal perception as changes in the timing necessary for the sound and the flash to be perceived as synchronous. We found that within a regular context, the perceived timing of deviant last stimuli changed so that the relative anisochrony appeared to be perceptually decreased. In the irregular context, the perceived timing of irregular stimuli following a regular sequence was not affected. These observations suggest that humans use temporal expectations to evaluate the regularity of sequences and that expectations are combined with sensory stimuli to adapt perceived timing to follow the statistics of the environment. Expectations can be seen as a priori probabilities on which the perceived timing of stimuli depends. PMID:27441686

  14. Dimensional renormalization: Ladders and rainbows

    SciTech Connect

    Delbourgo, R.; Kalloniatis, A.C.; Thompson, G.

    1996-10-01

    Renormalization factors are most easily extracted by going to the massless limit of the quantum field theory and retaining only a single momentum scale. We derive the factors and renormalized Green's functions to all orders in perturbation theory for rainbow graphs and vertex (or scattering) diagrams at zero momentum transfer, in the context of dimensional regularization, and we prove that the correct anomalous dimensions for those processes emerge in the limit D → 4. © 1996 The American Physical Society.

  15. A regularization-free Young's modulus reconstruction algorithm for ultrasound elasticity imaging.

    PubMed

    Pan, Xiaochang; Gao, Jing; Shao, Jinhua; Luo, Jianwen; Bai, Jing

    2013-01-01

    Ultrasound elasticity imaging aims to reconstruct the distribution of elastic modulus (e.g., Young's modulus) within biological tissues, since the value of the elastic modulus is often related to pathological changes. Currently, most elasticity imaging algorithms face the challenge of choosing the value of the regularization constant. We propose a more applicable algorithm that requires no regularization. This algorithm is not only simple to use, but also has relatively high accuracy. Our method combines a nonrigid registration technique with a tissue incompressibility assumption to estimate the two-dimensional (2D) displacement field, and uses the finite element method (FEM) to reconstruct the Young's modulus distribution. Simulation and phantom experiments were performed to evaluate the algorithm; both showed that the proposed algorithm can reconstruct the Young's modulus with an accuracy of 63-85%.

  16. Pauli-Villars regularization of field theories on the light front

    SciTech Connect

    Hiller, John R.

    2010-12-22

    Four-dimensional quantum field theories generally require regularization to be well defined. This can be done in various ways, but here we focus on Pauli-Villars (PV) regularization and apply it to nonperturbative calculations of bound states. The philosophy is to introduce enough PV fields to the Lagrangian to regulate the theory perturbatively, including preservation of symmetries, and assume that this is sufficient for the nonperturbative case. The numerical methods usually necessary for nonperturbative bound-state problems are then applied to a finite theory that has the original symmetries. The bound-state problem is formulated as a mass eigenvalue problem in terms of the light-front Hamiltonian. Applications to quantum electrodynamics are discussed.
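
    The mechanism behind PV regularization can be stated in one line; as a reminder (standard textbook material, not specific to this paper's Lagrangian), pairing a scalar field of mass m with a heavy PV ghost of mass Λ modifies the propagator as

        \frac{1}{k^2 - m^2 + i\epsilon} - \frac{1}{k^2 - \Lambda^2 + i\epsilon}
            = \frac{m^2 - \Lambda^2}{(k^2 - m^2 + i\epsilon)(k^2 - \Lambda^2 + i\epsilon)} ,

    which falls off as 1/k^4 at large k, so loop integrals converge while Λ plays the role of the cutoff and the ghost decouples after renormalization as Λ → ∞.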

  17. TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY

    SciTech Connect

    Crotts, Arlin P. S.

    2009-05-20

    Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if the determining factors involve humans rather than phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ~50% of reports originate from near Aristarchus, ~16% from Plato, ~6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ~80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for them. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.

  18. Phase-regularized polygon computer-generated holograms.

    PubMed

    Im, Dajeong; Moon, Eunkyoung; Park, Yohan; Lee, Deokhwan; Hahn, Joonku; Kim, Hwi

    2014-06-15

    The dark-line defect problem in the conventional polygon computer-generated hologram (CGH) is addressed. To resolve this problem, we clarify the physical origin of the defect and address the concept of phase-regularization. A novel synthesis algorithm for a phase-regularized polygon CGH for generating photorealistic defect-free holographic images is proposed. The optical reconstruction results of the phase-regularized polygon CGHs without the dark-line defects are presented.

  19. Evolution and regularity results for epitaxially strained thin films and material voids

    NASA Astrophysics Data System (ADS)

    Piovano, Paolo

    In this dissertation we study free boundary problems that model the evolution of interfaces in the presence of elasticity, such as thin film profiles and material void boundaries. These problems are characterized by the competition between the elastic bulk energy and the anisotropic surface energy. First, we consider the evolution equation with curvature regularization that models the motion of a two-dimensional thin film by evaporation-condensation on a rigid substrate. The film is strained due to the mismatch between the crystalline lattices of the two materials, and anisotropy is taken into account. We present the results contained in [62], where the author establishes short time existence, uniqueness and regularity of the solution using De Giorgi's minimizing movements to exploit the L^2-gradient flow structure of the equation. This seems to be the first analytical result for the evaporation-condensation case in the presence of elasticity. Second, we consider the relaxed energy introduced in [20] that depends on admissible pairs (E, u) of sets E and functions u defined only outside of E. For dimension three this energy appears in the study of material voids in solids, where the pairs (E, u) are interpreted as the admissible configurations that consist of void regions E in the space and of displacements u of the atoms of the crystal. We provide the precise mathematical framework that guarantees the existence of minimal energy pairs (E, u). Then, we establish that for every minimal configuration (E, u), the function u is C^{1,γ}_{loc}-regular outside an essentially closed subset of E. No hypothesis of starshapedness is assumed on the voids, and all the results contained in [18] hold true for every dimension d ≥ 2. Key Words and Phrases: surface energy, elastic bulk energy, minimizing movements, evolution, gradient flow, motion by mean curvature, minimal configurations, existence, uniqueness, regularity, partial regularity, lower density bound, thin film

  20. Effects of junctional correlations in the totally asymmetric simple exclusion process on random regular networks.

    PubMed

    Baek, Yongjoo; Ha, Meesoon; Jeong, Hawoong

    2014-12-01

    We investigate the totally asymmetric simple exclusion process on closed and directed random regular networks, a simple model of active transport in one-dimensional segments coupled by junctions. Using a pair mean-field theory and detailed numerical analyses, we find that the correlations at junctions induce two notable deviations from the simple mean-field theory, which neglects these correlations: (1) a narrower range of particle density for phase coexistence and (2) algebraic decay of the density profile with exponent 1/2 even outside the maximal-current phase. We show that these anomalies are attributable to the effective slow bonds formed by the network junctions.
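
    For readers unfamiliar with the model, a single-segment TASEP on a ring (without the junction structure that is the paper's actual subject) can be simulated in a few lines; all parameter values here are illustrative:

        import numpy as np

        def tasep_ring(L=100, density=0.5, steps=10_000, seed=0):
            # Random sequential update: pick a site, and if it holds a particle
            # whose right neighbor is empty, the particle hops right.
            rng = np.random.default_rng(seed)
            sites = np.zeros(L, dtype=int)
            sites[rng.choice(L, int(density * L), replace=False)] = 1
            hops = 0
            for _ in range(steps):
                i = rng.integers(L)
                j = (i + 1) % L
                if sites[i] == 1 and sites[j] == 0:
                    sites[i], sites[j] = 0, 1
                    hops += 1
            return hops / steps  # fraction of attempts that produced a hop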

  1. Fragmentation processes: from irregular mud-cracks to regular polygonal patterns

    NASA Astrophysics Data System (ADS)

    Jagla, Eduardo; Rojo, Alberto

    2000-03-01

    We consider an originally irregular pattern of fractures (mud-crack-like) at the surface of a half-infinite medium that cools (or desiccates) from its surface. The fracture pattern advances toward the interior as the material progressively cools down. We show that the tendency of the two-dimensional pattern of fractures as a function of depth is to evolve smoothly toward polygonal configurations that minimize a free energy functional. Our model explains the origin of regular columnar structures of polygonal cross section in volcanic lava flows, and also in some desiccation experiments on starches. Statistical analysis of our results compares quite well with that of lava and starch.

  2. Heavy pair production currents with general quantum numbers in dimensionally regularized nonrelativistic QCD

    SciTech Connect

    Hoang, Andre H.; Ruiz-Femenia, Pedro

    2006-12-01

    We discuss the form and construction of general color singlet heavy particle-antiparticle pair production currents for arbitrary quantum numbers, and issues related to evanescent spin operators and scheme dependences in nonrelativistic QCD in n = 3 - 2ε dimensions. The anomalous dimensions of the leading interpolating currents for heavy quark and colored scalar pairs in arbitrary ^{2S+1}L_J angular-spin states are determined at next-to-leading order in the nonrelativistic power counting.

  3. On uniqueness of quasi-regular solutions to Protter problem for Keldish type equations

    NASA Astrophysics Data System (ADS)

    Hristov, T. D.; Popivanov, N. I.; Schneider, M.

    2013-12-01

    Some three-dimensional boundary value problems for mixed type equations of second kind are studied. Analogous problems for mixed type equations of the first kind were posed by M. Protter in the 1950s. For hyperbolic-elliptic equations they are multidimensional analogues of the classical two-dimensional Morawetz-Guderley transonic problem. For hyperbolic and weakly hyperbolic equations the Protter problems are 3D analogues of the Darboux or Cauchy-Goursat plane problems. In this case, in contrast to the well-posedness of the 2D problems, the new problems are strongly ill-posed. In this paper, similar Protter problems are stated for equations of Keldish type involving lower order terms. It is shown that the new problems are also ill-posed. A notion of quasi-regular solution is given and sufficient conditions for uniqueness of such solutions are found. The dependence on the lower order terms is also studied.

  4. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    ERIC Educational Resources Information Center

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  5. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  6. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  7. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  8. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  9. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...

  10. 12 CFR 311.5 - Regular procedure for closing meetings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Regular procedure for closing meetings. 311.5... RULES GOVERNING PUBLIC OBSERVATION OF MEETINGS OF THE CORPORATION'S BOARD OF DIRECTORS § 311.5 Regular... a meeting will be taken only when a majority of the entire Board votes to take such action....

  11. Reading Comprehension and Regularized Orthography. Parts 1 and 2.

    ERIC Educational Resources Information Center

    Carvell, Robert L.

    The purpose of this study was to compare mature readers' comprehension of text presented in traditional orthography with their comprehension of text presented in a regularized orthography, specifically, to determine whether, when traditional orthography is regularized, any loss of meaning is attributable to the loss of the visual dissimilarity of…

  12. Inclusion Professional Development Model and Regular Middle School Educators

    ERIC Educational Resources Information Center

    Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo

    2014-01-01

    The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…

  13. 29 CFR 778.408 - The specified regular rate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... applicable).” The word “regular” describing the rate in this provision is not to be treated as surplusage. To... agreement in the courts. In both of the two cases before it, the Supreme Court found that the relationship... rate. There is no requirement, however, that the regular rate specified be equal to the regular rate...

  14. 32 CFR 724.211 - Regularity of government affairs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Regularity of government affairs. 724.211 Section 724.211 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY PERSONNEL NAVAL DISCHARGE REVIEW BOARD Authority/Policy for Departmental Discharge Review § 724.211 Regularity of...

  15. Gait variability and regularity of people with transtibial amputations.

    PubMed

    Parker, Kim; Hanada, Ed; Adderson, James

    2013-02-01

    Gait temporal-spatial variability and step regularity as measured by trunk accelerometry, measures relevant to fall risk and mobility, have not been well studied in individuals with lower-limb amputations. The study objective was to explore the differences in gait variability and regularity between individuals with unilateral transtibial amputations due to vascular (VAS) or nonvascular (NVAS) causes, and their relation to fall history over the past year. Of the 34 individuals with transtibial amputations who participated, 72% of the 18 individuals with VAS and 50% of the 16 individuals with NVAS had experienced at least one fall in the past year. The incidence of falls was not significantly different between groups. Variability measures included the coefficient of variation (CV) in swing time and step length obtained from an electronic walkway. Regularity measures included anteroposterior, medial-lateral and vertical step regularity obtained from trunk accelerations. When controlling for velocity, balance confidence and time since amputation, there were no significant differences in gait variability or regularity measures between individuals with VAS and NVAS. In comparing fallers to nonfallers, no significant differences were found in gait variability or regularity measures when controlling for velocity and balance confidence. Vertical step regularity (p=0.026) was the only parameter significantly related to fall history, though with only poor to fair discriminatory ability. There is some indication that individuals who have experienced a fall may walk with decreased regularity, and this should be explored in future studies.
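
    Step regularity from trunk accelerations is commonly quantified as the autocorrelation of the signal at a lag of one step (a Moe-Nilssen-style sketch; whether this exact variant was used in the study is an assumption):

        import numpy as np

        def step_regularity(acc, step_lag):
            # acc: trunk acceleration samples along one axis
            # step_lag: samples per step, estimated elsewhere
            a = np.asarray(acc, dtype=float)
            a = a - a.mean()
            ac = np.correlate(a, a, mode='full')[len(a) - 1:]
            ac = ac / ac[0]  # normalize so the lag-0 autocorrelation is 1
            return ac[step_lag]  # values near 1 indicate highly regular steps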

  16. 77 FR 76078 - Regular Board of Directors Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Sunshine Act Meeting TIME & DATE: 2:00 p.m., Wednesday, January 9, 2013.... Call to Order II. Executive Session III. Approval of the Regular Board of Directors Meeting Minutes...

  17. 77 FR 15142 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    ... From the Federal Register Online via the Government Publishing Office ] NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 2:30 p.m., Monday, March 26, 2012.... Executive Session III. Approval of the Regular Board of Directors Meeting Minutes IV. Approval of the...

  18. 76 FR 74831 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 1:30 p.m., Monday, December 5... . AGENDA: I. Call to Order II. Executive Session III. Approval of the Regular Board of Directors...

  19. 29 CFR 553.233 - “Regular rate” defined.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... OF THE FAIR LABOR STANDARDS ACT TO EMPLOYEES OF STATE AND LOCAL GOVERNMENTS Fire Protection and Law Enforcement Employees of Public Agencies Overtime Compensation Rules § 553.233 “Regular rate” defined. The rules for computing an employee's “regular rate”, for purposes of the Act's overtime pay...

  20. Regular expression order-sorted unification and matching

    PubMed Central

    Kutsia, Temur; Marin, Mircea

    2015-01-01

    We extend order-sorted unification by permitting regular expression sorts for variables and in the domains of function symbols. The obtained signature corresponds to a finite bottom-up unranked tree automaton. We prove that regular expression order-sorted (REOS) unification is of type infinitary and decidable. The unification problem we present generalizes some known problems, such as order-sorted unification for ranked terms, sequence unification, and word unification with regular constraints. Decidability of REOS unification implies that sequence unification with regular hedge language constraints is decidable, generalizing the decidability result of word unification with regular constraints to terms. A sort weakening algorithm helps to construct a minimal complete set of REOS unifiers from the solutions of sequence unification problems. Moreover, we design a complete algorithm for REOS matching, and show that this problem is NP-complete and the corresponding counting problem is #P-complete. PMID:26523088

  1. Generalized unitarity and six-dimensional helicity

    SciTech Connect

    Bern, Zvi; Dennen, Tristan; Huang, Yu-tin; Ita, Harald; Carrasco, John Joseph

    2011-04-15

    We combine the unitarity method with the six-dimensional helicity formalism of Cheung and O'Connell to construct loop-level scattering amplitudes. As a first example, we construct dimensionally regularized QCD one-loop four-point amplitudes. As a nontrivial multiloop example, we confirm that the recently constructed four-loop four-point amplitude of N=4 super-Yang-Mills theory, including nonplanar contributions, is valid for dimensions D ≤ 6. We comment on the connection of our approach to the recently discussed Higgs infrared regulator and on dual conformal properties in six dimensions.

  2. Dimensional Duality

    SciTech Connect

    Green, Daniel; Lawrence, Albion; McGreevy, John; Morrison, David R.; Silverstein, Eva

    2007-05-18

    We show that string theory on a compact negatively curved manifold, preserving a U(1)^{b_1} winding symmetry, grows at least b_1 new effective dimensions as the space shrinks. The winding currents yield a "D-dual" description of a Riemann surface of genus h in terms of its 2h-dimensional Jacobian torus, perturbed by a closed string tachyon arising as a potential energy term in the worldsheet sigma model. D-branes on such negatively curved manifolds also reveal this structure, with a classical moduli space consisting of a b_1-torus. In particular, we present an AdS/CFT system which offers a non-perturbative formulation of such supercritical backgrounds. Finally, we discuss generalizations of this new string duality.

  3. Computation of dynamic stress intensity factors using the boundary element method based on Laplace transform and regularized boundary integral equations

    NASA Astrophysics Data System (ADS)

    Tanaka, Masataka; Nakamura, Masayuki; Aoki, Kazuhiko; Matsumoto, Toshiro

    1993-07-01

    This paper presents a computational method of dynamic stress intensity factors (DSIF) in two-dimensional problems. In order to obtain accurate numerical results of DSIF, the boundary element method based on the Laplace transform and regularized boundary integral equations is applied to the computation of transient elastodynamic responses. A computer program is newly developed for two-dimensional elastodynamics. Numerical computation of DSIF is carried out for a rectangular plate with a center crack under impact tension. Accuracy of the results is investigated from the viewpoint of computational conditions such as the number of sampling points of the inverse Laplace transform and the number of boundary elements.

  4. Group-sparsity regularization for ill-posed subsurface flow inverse problems

    NASA Astrophysics Data System (ADS)

    Golmohammadi, Azarang; Khaninezhad, Mohammad-Reza M.; Jafarpour, Behnam

    2015-10-01

    Sparse representations provide a flexible and parsimonious description of high-dimensional model parameters for reconstructing subsurface flow property distributions from limited data. To further constrain ill-posed inverse problems, group-sparsity regularization can take advantage of possible relations among the entries of unknown sparse parameters when: (i) groups of sparse elements are either collectively active or inactive and (ii) only a small subset of the groups is needed to approximate the parameters of interest. Since subsurface properties exhibit strong spatial connectivity patterns they may lead to sparse descriptions that satisfy the above conditions. When these conditions are established, a group-sparsity regularization can be invoked to facilitate the solution of the resulting inverse problem by promoting sparsity across the groups. The proposed regularization penalizes the number of groups that are active without promoting sparsity within each group. Two implementations are presented in this paper: one based on the multiresolution tree structure of Wavelet decomposition, without a need for explicit prior models, and another learned from explicit prior model realizations using sparse principal component analysis (SPCA). In each case, the approach first classifies the parameters of the inverse problem into groups with specific connectivity features, and then takes advantage of the grouped structure to recover the relevant patterns in the solution from the flow data. Several numerical experiments are presented to demonstrate the advantages of additional constraining power of group-sparsity in solving ill-posed subsurface model calibration problems.
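
    The penalty described, whole groups active or inactive with no sparsity inside a group, is the group-lasso penalty λ Σ_g ||x_g||_2; its proximal operator, the workhorse of typical solvers, is a blockwise soft-thresholding (a generic sketch, not the authors' implementation):

        import numpy as np

        def prox_group_sparsity(x, groups, lam):
            # Proximal operator of lam * sum_g ||x_g||_2: each group is either
            # shrunk as a whole or zeroed out, promoting sparsity across groups
            # without promoting sparsity within a group.
            out = np.array(x, dtype=float)
            for idx in groups:  # groups: list of index arrays
                norm = np.linalg.norm(out[idx])
                out[idx] = 0.0 if norm <= lam else out[idx] * (1.0 - lam / norm)
            return out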

  5. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS’s hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down LDSs’ spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data we incorporate a second order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix which lead to two instances of our rLDS. We show that our rLDS is able to recover well the intrinsic dimensionality of the time series dynamics and it improves the predictive performance when compared to baselines on both synthetic and real-world MTS datasets. PMID:25905027

  6. Two hybrid regularization frameworks for solving the electrocardiography inverse problem

    NASA Astrophysics Data System (ADS)

    Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Liu, Feng; Crozier, Stuart

    2008-09-01

    In this paper, two hybrid regularization frameworks, LSQR-Tik and Tik-LSQR, which integrate the properties of the direct regularization method (Tikhonov) and the iterative regularization method (LSQR), are proposed and investigated for solving ECG inverse problems. The LSQR-Tik method is based on the Lanczos process, which yields a sequence of small bidiagonal systems that approximate the original ill-posed problem; the Tikhonov regularization method is then applied to stabilize the projected problem. The Tik-LSQR method is formulated as an iterative LSQR inverse, augmented with a Tikhonov-like prior information term. The performance of these two hybrid methods is evaluated using a realistic heart-torso model simulation protocol, in which the heart surface source method is employed to calculate the simulated epicardial potentials (EPs) from the action potentials (APs), and the acquired EPs are then used to calculate simulated body surface potentials (BSPs). The results show that the regularized solutions obtained by the LSQR-Tik method approximate those of the Tikhonov method, while the computational cost of the LSQR-Tik method is much lower. Moreover, the Tik-LSQR scheme can reconstruct the epicardial potential distribution more accurately, particularly for BSPs with high noise levels. This investigation suggests that hybrid regularization methods may be more effective than separate regularization approaches for ECG inverse problems.
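
    The Tik-LSQR idea of augmenting an iterative LSQR solve with a Tikhonov-like term can be sketched with SciPy's damped LSQR, which minimizes ||Ax - b||^2 + damp^2 ||x||^2 (a zeroth-order prior; the paper's prior information term may differ, and the data here are synthetic):

        import numpy as np
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 500))   # underdetermined forward model
        x_true = np.zeros(500)
        x_true[::50] = 1.0
        b = A @ x_true + 0.01 * rng.standard_normal(200)

        # damp acts as the Tikhonov weight applied inside the iterative solver
        x_reg = lsqr(A, b, damp=0.1, iter_lim=200)[0]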

  7. Learning rates of lq coefficient regularization learning with gaussian kernel.

    PubMed

    Lin, Shaobo; Zeng, Jinshan; Fang, Jian; Xu, Zongben

    2014-10-01

    Regularization is a well-recognized powerful strategy to improve the performance of a learning machine, and l_q regularization schemes with 0 < q < ∞ are central in use. It is known that different q leads to different properties of the deduced estimators: l_2 regularization leads to a smooth estimator, while l_1 regularization leads to a sparse estimator. How the generalization capability of l_q regularization learning varies with q is therefore worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing l_q coefficient regularization schemes in the sample-dependent hypothesis space associated with a Gaussian kernel can attain the same almost optimal learning rates for all 0 < q < ∞. That is, the upper and lower bounds of learning rates for l_q regularization learning are asymptotically identical for all 0 < q < ∞. Our finding tentatively reveals that in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other nongeneralization criteria like smoothness, computational complexity or sparsity.
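
    In symbols, the coefficient regularization scheme studied here (notation reconstructed from the abstract, with K_σ the Gaussian kernel and λ the regularization parameter) is

        \min_{a \in \mathbb{R}^m} \; \frac{1}{m} \sum_{i=1}^{m}
            \Bigl( \sum_{j=1}^{m} a_j K_\sigma(x_j, x_i) - y_i \Bigr)^2
            + \lambda \sum_{j=1}^{m} |a_j|^q ,

    and the main theorem says the attainable learning rates are asymptotically the same for every 0 < q < ∞.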

  8. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  9. A local-order regularization for geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Gheymasi, H. Mohammadi; Gholami, A.

    2013-11-01

    Different types of regularization have been developed to obtain stable solutions to linear inverse problems. Among these, total variation (TV) is known as an edge-preserving method, which leads to piecewise constant solutions and has received much attention for solving inverse problems arising in geophysical studies. However, the method shows staircase effects and is not suitable for models that include smooth regions. To overcome the staircase effect, we present a method which employs a local-order difference operator in the regularization term. This method is performed in two steps: First, we apply a pre-processing step to find the edge locations in the regularized solution using a properly defined minmod limiter, where the edges are determined by comparing the solutions obtained using different-order regularizations of the TV type. Then, we construct a local-order difference operator based on the information about the edge locations obtained from the pre-processing step, which is subsequently used as a regularization operator in the final sparsity-promoting regularization. Experimental results from synthetic and real seismic traveltime tomography show that the proposed inversion method is able to retain the smooth regions of the regularized solution while preserving the sharp transitions present in it.
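
    The textbook minmod limiter at the core of the pre-processing step looks as follows (the paper's "properly defined" variant compares solutions of different-order TV regularizations; only the generic form is shown):

        def minmod(a, b):
            # Returns the smaller-magnitude argument when a and b share a sign,
            # and 0 otherwise; a zero flags a potential edge location.
            if a * b <= 0.0:
                return 0.0
            return a if abs(a) < abs(b) else b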

  10. Three regularities of recognition memory: the role of bias.

    PubMed

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
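
    The link between likelihood-ratio decisions and the Mirror Effect can be demonstrated in a few lines (equal-variance Gaussian strengths, a simplification of the paper's setting): with new items ~ N(0,1) and old items ~ N(d,1), the fixed criterion LR = 1 sits at x = d/2, so a stronger condition (larger d) raises hits and lowers false alarms simultaneously.

        import numpy as np
        from scipy.stats import norm

        for d in (0.5, 1.0, 2.0):
            c = d / 2.0                  # x where the likelihood ratio equals 1
            hits = 1 - norm.cdf(c - d)   # P(x > c | old)
            fas = 1 - norm.cdf(c)        # P(x > c | new)
            print(f"d'={d:.1f}  hits={hits:.3f}  false alarms={fas:.3f}")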

  11. Motion-aware temporal regularization for improved 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mory, Cyril; Janssens, Guillaume; Rit, Simon

    2016-09-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp–Davis–Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.

  13. Numerical Study of Sound Emission by 2D Regular and Chaotic Vortex Configurations

    NASA Astrophysics Data System (ADS)

    Knio, Omar M.; Collorec, Luc; Juvé, Daniel

    1995-02-01

    The far-field noise generated by a system of three Gaussian vortices lying over a flat boundary is numerically investigated using a two-dimensional vortex element method. The method is based on the discretization of the vorticity field into a finite number of smoothed vortex elements with overlapping cores. The elements are convected in a Lagrangian reference frame along particle trajectories using the local velocity vector, given in terms of a desingularized Biot-Savart law. The initial structure of the vortex system is triangular; a one-dimensional family of initial configurations is constructed by keeping one side of the triangle fixed and vertical, and varying the abscissa of the centroid of the remaining vortex. The inviscid dynamics of this vortex configuration are first investigated using non-deformable vortices. Depending on the aspect ratio of the initial system, regular or chaotic motion occurs. Due to wall-related symmetries, the far-field sound always exhibits a time-independent quadrupolar directivity with maxima parallel and perpendicular to the wall. When regular motion prevails, the noise spectrum is dominated by discrete frequencies which correspond to the fundamental system frequency and its superharmonics. For chaotic motion, a broadband spectrum is obtained; computed sound levels are substantially higher than in non-chaotic systems. A more sophisticated analysis is then performed which accounts for vortex core dynamics. The results show that the vortex cores are susceptible to inviscid instability, which leads to violent vorticity reorganization within the core. This phenomenon has little effect on the large-scale features of the motion of the system or on low-frequency sound emission. However, it leads to the generation of a high-frequency noise band in the acoustic pressure spectrum. The latter is observed in both regular and chaotic system simulations.
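
    The desingularized Biot-Savart law at the heart of such vortex element methods can be sketched as follows (a generic algebraic smoothing with parameter delta; the paper's Gaussian cores and the wall's image vortices are not modeled here):

        import numpy as np

        def desingularized_velocity(z, centers, gammas, delta=0.05):
            # Velocity at point z = (x, y) induced by 2D vortex elements with
            # circulations gammas, using the smoothed kernel
            # u = Gamma / (2 pi) * (-dy, dx) / (r^2 + delta^2).
            u = np.zeros(2)
            for (cx, cy), g in zip(centers, gammas):
                dx, dy = z[0] - cx, z[1] - cy
                r2 = dx * dx + dy * dy + delta * delta
                u += g / (2.0 * np.pi) * np.array([-dy, dx]) / r2
            return u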

  14. Reconstruction of 3D ultrasound images based on Cyclic Regularized Savitzky-Golay filters.

    PubMed

    Toonkum, Pollakrit; Suwanwela, Nijasri C; Chinrungrueng, Chedsada

    2011-02-01

    This paper presents a new three-dimensional (3D) ultrasound reconstruction algorithm for generating 3D images from a series of two-dimensional (2D) B-scans acquired in the mechanical linear scanning framework. Unlike most existing 3D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the Cyclic Regularized Savitzky-Golay (CRSG) filter, is a new variant of the Savitzky-Golay (SG) smoothing filter. The CRSG filter improves upon the original SG filter in two respects: First, the cyclic indicator function has been incorporated into the least-squares cost function to enable the CRSG filter to approximate nonuniformly spaced data of the unobserved image intensities contained in unfilled voxels and to reduce speckle noise of the observed image intensities contained in filled voxels. Second, a regularization function has been added to the least-squares cost function as a mechanism to balance the degree of speckle reduction against the degree of detail preservation. The CRSG filter has been evaluated and compared with the Voxel Nearest-Neighbor (VNN) interpolation post-processed by the Adaptive Speckle Reduction (ASR) filter, the VNN interpolation post-processed by the Adaptive Weighted Median (AWM) filter, the Distance-Weighted (DW) interpolation, and the Adaptive Distance-Weighted (ADW) interpolation, on reconstructing a synthetic 3D spherical image and a clinical 3D carotid artery bifurcation in the mechanical linear scanning framework. This preliminary evaluation indicates that the CRSG filter is more effective in both speckle reduction and geometric reconstruction of 3D ultrasound images than the other methods. PMID:20696448
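
    The plain Savitzky-Golay smoother that the CRSG filter builds on (before adding the cyclic indicator and the regularization term) is a local least-squares polynomial fit, available directly in SciPy; a minimal usage sketch with illustrative parameters:

        import numpy as np
        from scipy.signal import savgol_filter

        t = np.linspace(0.0, 1.0, 200)
        rng = np.random.default_rng(0)
        noisy = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(200)
        # Fit a cubic polynomial in each 21-sample window and keep its center value
        smoothed = savgol_filter(noisy, window_length=21, polyorder=3)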

  15. Exploring the spectrum of regularized bosonic string theory

    SciTech Connect

    Ambjørn, J.; Makeenko, Y.

    2015-03-15

    We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.

  16. Regularity criterion for the 3D Hall-magneto-hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dai, Mimi

    2016-07-01

    This paper studies the regularity problem for the 3D incompressible resistive viscous Hall-magneto-hydrodynamic (Hall-MHD) system. The Kolmogorov 41 phenomenological theory of turbulence [14] predicts that there exists a critical wavenumber above which the high frequency part is dominated by the dissipation term in the fluid equation. Inspired by this idea, we apply an approach of splitting the wavenumber combined with an estimate of the energy flux to obtain a new regularity criterion. The regularity condition presented here is weaker than conditions in the existing criteria (Prodi-Serrin type criteria) for the 3D Hall-MHD system.

  17. Factors distinguishing regular readers of breast cancer information in magazines.

    PubMed

    Johnson, J D

    1997-01-01

    This study examined the differences between women who were regular and occasional readers of breast cancer information in magazines. Based on uses and gratifications theory and the Health Belief Model, women respondents (n = 366) were predicted to differentially expose themselves to information. A discriminant analysis showed that women who were regular readers reported greater fear, perceived vulnerability, general health concern, personal experience, and surveillance need for breast cancer-related information. The results are discussed in terms of the potential positive and negative consequences of regular exposure to breast cancer information in magazines. PMID:9311097

  18. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks of existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. To this end, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution through two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined L1-MSAL. Moreover, to deal with nonlinear learning problems, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object.
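
    For reference, the graph Laplacian smoothness penalty that the L1-LLR term replaces has this generic form (an illustrative sketch; W is any affinity matrix, e.g. the L1-LLR reconstruction weights):

        import numpy as np

        def graph_laplacian_penalty(W, f):
            # f^T L f = (1/2) * sum_ij W_ij (f_i - f_j)^2, with L = D - W:
            # small values mean f varies little across strongly connected nodes.
            L = np.diag(W.sum(axis=1)) - W
            return float(f @ L @ f)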

  19. Optimizing the regularization for image reconstruction of cerebral diffuse optical tomography.

    PubMed

    Habermehl, Christina; Steinbrink, Jens; Müller, Klaus-Robert; Haufe, Stefan

    2014-09-01

    Functional near-infrared spectroscopy (fNIRS) is an optical method for noninvasively determining brain activation by estimating changes in the absorption of near-infrared light. Diffuse optical tomography (DOT) extends fNIRS by applying overlapping “high density” measurements, and thus providing a three-dimensional imaging with an improved spatial resolution. Reconstructing brain activation images with DOT requires solving an underdetermined inverse problem with far more unknowns in the volume than in the surface measurements. All methods of solving this type of inverse problem rely on regularization and the choice of corresponding regularization or convergence criteria. While several regularization methods are available, it is unclear how well suited they are for cerebral functional DOT in a semi-infinite geometry. Furthermore, the regularization parameter is often chosen without an independent evaluation, and it may be tempting to choose the solution that matches a hypothesis and rejects the other. In this simulation study, we start out by demonstrating how the quality of cerebral DOT reconstructions is altered with the choice of the regularization parameter for different methods. To independently select the regularization parameter, we propose a cross-validation procedure which achieves a reconstruction quality close to the optimum. Additionally, we compare the outcome of seven different image reconstruction methods for cerebral functional DOT. The methods selected include reconstruction procedures that are already widely used for cerebral DOT [minimum l2-norm estimate (l2MNE) and truncated singular value decomposition], recently proposed sparse reconstruction algorithms [minimum l1- and a smooth minimum l0-norm estimate (l1MNE, l0MNE, respectively)] and a depth- and noise-weighted minimum norm (wMNE). Furthermore, we expand the range of algorithms for DOT by adapting two EEG-source localization algorithms [sparse basis field expansions and linearly
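
    The proposed cross-validated choice of the regularization parameter can be sketched generically for a Tikhonov-regularized linear problem (a minimal illustration of the idea, not the authors' exact procedure):

        import numpy as np

        def cv_tikhonov_lambda(A, b, lambdas, k=5, seed=0):
            # k-fold cross-validation over candidate Tikhonov weights: fit on
            # k-1 folds of the measurements, score squared prediction error on
            # the held-out fold, and return the best-scoring lambda.
            rng = np.random.default_rng(seed)
            m, n = A.shape
            folds = np.array_split(rng.permutation(m), k)
            errs = []
            for lam in lambdas:
                err = 0.0
                for f in folds:
                    train = np.setdiff1d(np.arange(m), f)
                    At, bt = A[train], b[train]
                    x = np.linalg.solve(At.T @ At + lam * np.eye(n), At.T @ bt)
                    err += np.sum((A[f] @ x - b[f]) ** 2)
                errs.append(err)
            return lambdas[int(np.argmin(errs))]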

  1. Generic quantum walks with memory on regular graphs

    NASA Astrophysics Data System (ADS)

    Li, Dan; Mc Gettrick, Michael; Gao, Fei; Xu, Jie; Wen, Qiao-Yan

    2016-04-01

    Quantum walks with memory (QWM) are a type of modified quantum walk that records the walker's latest path. To date, only two kinds of QWM have been presented. Designing more QWM is desirable, so that their potential can be explored. In this work, by presenting the one-to-one correspondence between QWM on a regular graph and quantum walks without memory (QWoM) on the line digraph of the regular graph, we construct a generic model of QWM on regular graphs. This construction gives a general scheme for building all possible standard QWM on regular graphs and makes it possible to study properties of different kinds of QWM. Here, by taking the simplest example, QWM with one memory on the line, we analyze some properties of QWM, such as variance, occupancy rate, and localization.

  2. Loop Invariants, Exploration of Regularities, and Mathematical Games.

    ERIC Educational Resources Information Center

    Ginat, David

    2001-01-01

    Presents an approach for illustrating, on an intuitive level, the significance of loop invariants for algorithm design and analysis. The illustration is based on mathematical games that require the exploration of regularities via problem-solving heuristics. (Author/MM)
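
    As a concrete illustration of the concept (an example snippet, not taken from the article), the comment in the loop below states an invariant that holds before every iteration and delivers the postcondition at exit:

        def sum_of_squares(n):
            # Postcondition: returns 0^2 + 1^2 + ... + (n-1)^2.
            total, i = 0, 0
            while i < n:
                # invariant: total == 0^2 + 1^2 + ... + (i-1)^2
                # true initially (i=0, total=0), preserved by each step,
                # and at exit (i=n) it gives exactly the postcondition.
                total += i * i
                i += 1
            return total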

  3. Regularization of the restricted problem of four bodies.

    NASA Technical Reports Server (NTRS)

    Giacaglia, G. E. O.

    1967-01-01

    Regularization of restricted three-body problem extended to case where three primaries of any mass revolve in circular orbits around common center of mass and fourth body of infinitesimal mass moves in their field

  4. What's Regular Exercise Worth? Maybe $2,500 Per Year

    MedlinePlus

    ... medlineplus.gov/news/fullstory_160859.html What's Regular Exercise Worth? Maybe $2,500 Per Year That's how ... afford the time and money to start an exercise routine? Maybe this will help: A new study ...

  5. Are Pupils in Special Education Too "Special" for Regular Education?

    NASA Astrophysics Data System (ADS)

    Pijl, Ysbrand J.; Pijl, Sip J.

    1998-01-01

    In the Netherlands special needs pupils are often referred to separate schools for the Educable Mentally Retarded (EMR) or the Learning Disabled (LD). There is an ongoing debate on how to reduce the growing numbers of special education placements. One of the main issues in this debate concerns the size of the difference in cognitive abilities between pupils in regular education and those eligible for LD or EMR education. In this study meta-analysis techniques were used to synthesize the findings from 31 studies on differences between pupils in regular primary education and those in special education in the Netherlands. Studies were grouped into three categories according to the type of measurements used: achievement, general intelligence and neuropsychological tests. It was found that pupils in regular education and those in special education differ in achievement and general intelligence. Pupils in schools for the educable mentally retarded in particular perform at a much lower level than is common in regular Dutch primary education.

  6. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

    Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity..., and at the same time, it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean-free-path epsilon approaches 0, and the convergence rate is estimated.

  7. On almost regularity and π-normality of topological spaces

    NASA Astrophysics Data System (ADS)

    Saad Thabit, Sadeq Ali; Kamarulhaili, Hailiza

    2012-05-01

    π-Normality is a weaker version of normality. It was introduced by Kalantan in 2008. π-Normality lies between normality and almost normality (resp. quasi-normality). The importance of this topological property is that it behaves slightly differently from normality and almost normality (quasi-normality). π-Normality is neither a productive nor a hereditary property in general. In this paper, some properties of almost regular spaces are presented. In particular, a few results on almost regular spaces are improved. Some relationships between almost regularity and π-normality are presented. π-Generalized closed sets are used to obtain characterization and preservation theorems for π-normal spaces. We also show, via two counterexamples, that an almost regular Lindelöf space (resp. an almost regular space with a σ-locally finite base) is not necessarily π-normal. The almost normality of the Rational Sequence topology is also proved.

  8. A novel regularized edge-preserving super-resolution algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Chen, Fu-sheng; Zhang, Zhi-jie; Wang, Chen-sheng

    2013-09-01

    Using super-resolution (SR) technology is a good approach to obtaining high-resolution infrared images. However, image super-resolution reconstruction is essentially an ill-posed problem, so it is important to design an effective regularization term (image prior). A Gaussian prior is widely used in the regularization term, but the reconstructed SR image then becomes over-smooth. Here, a novel regularization term called the non-local means (NLM) term is derived, based on the assumption that natural image content is likely to repeat itself within some neighborhood. In the proposed framework, the estimated high-resolution image is obtained by minimizing a cost function. An iterative method is applied to solve the optimization problem, and as the iteration progresses, the regularization term is adaptively updated. The proposed algorithm has been tested in several experiments. The experimental results show that the proposed approach is robust and can reconstruct higher quality images in both quantitative terms and perceptual effect.
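
    The NLM prior can be written down directly: pixels whose surrounding patches look alike are penalized for taking different values. The Python sketch below evaluates such a penalty on the image interior; the patch size, search window, and decay parameter h are illustrative assumptions.

    ```python
    import numpy as np

    def nlm_penalty(img, patch=3, search=7, h=10.0):
        """Sum of patch-similarity-weighted squared pixel differences
        (borders are skipped to keep the sketch short)."""
        p, s = patch // 2, search // 2
        rows, cols = img.shape
        penalty = 0.0
        for i in range(p + s, rows - p - s):
            for j in range(p + s, cols - p - s):
                ref = img[i - p:i + p + 1, j - p:j + p + 1]
                for di in range(-s, s + 1):
                    for dj in range(-s, s + 1):
                        if di == 0 and dj == 0:
                            continue
                        cand = img[i + di - p:i + di + p + 1,
                                   j + dj - p:j + dj + p + 1]
                        w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                        penalty += w * (img[i, j] - img[i + di, j + dj]) ** 2
        return penalty
    ```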

  9. Identifying basketball performance indicators in regular season and playoff games.

    PubMed

    García, Javier; Ibáñez, Sergio J; De Santos, Raúl Martinez; Leite, Nuno; Sampaio, Jaime

    2013-03-01

    The aim of the present study was to identify the basketball game performance indicators which best discriminate winners and losers in regular season and playoff games. The sample was composed of 323 games of the ACB Spanish Basketball League, from the regular season (n=306) and from the playoffs (n=17). A preliminary cluster analysis allowed splitting the sample into balanced (equal to or below 12 points), unbalanced (between 13 and 28 points) and very unbalanced games (above 28 points). A discriminant analysis was then used to identify the performance indicators in both regular season and playoff games. In regular season games, the winning teams dominated in assists, defensive rebounds, and successful 2- and 3-point field goals. However, in playoff games the winning teams' superiority lay only in defensive rebounding. In practical terms, these results may help coaches to design training programs that reflect the importance of having different offensive set plays, and to implement specific conditioning programs to prepare for defensive rebounding.
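
    The two-stage analysis (cluster games by final point difference, then discriminate winners from losers on the game statistics) maps onto standard tooling. The sketch below uses synthetic stand-in data; the column names and sizes are illustrative, not the study's dataset.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    point_diff = np.abs(rng.normal(12, 8, size=306)).reshape(-1, 1)
    stats = rng.normal(size=(306, 4))   # e.g., assists, def. rebounds, 2P%, 3P%
    won = rng.integers(0, 2, size=306)  # 1 = winner, 0 = loser

    # Split games into balanced / unbalanced / very unbalanced groups.
    groups = KMeans(n_clusters=3, n_init=10).fit_predict(point_diff)

    lda = LinearDiscriminantAnalysis().fit(stats[groups == 0], won[groups == 0])
    # Coefficients with the largest magnitude point to the indicators that
    # best separate winners from losers within that group of games.
    print(lda.coef_)
    ```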

  10. Two-Dimensional Systolic Array For Kalman-Filter Computing

    NASA Technical Reports Server (NTRS)

    Chang, Jaw John; Yeh, Hen-Geul

    1988-01-01

    Two-dimensional, systolic-array, parallel data processor performs Kalman filtering in real time. Algorithm rearranged to be Faddeev algorithm for generalized signal processing. Algorithm mapped onto very-large-scale integrated-circuit (VLSI) chip in two-dimensional, regular, simple, expandable array of concurrent processing cells. Processor does matrix/vector-based algebraic computations. Applications include adaptive control of robots, remote manipulators and flexible structures and processing radar signals to track targets.

  11. Optimal feedback control infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.

  12. Estimating signal loss in regularized GRACE gravity field solutions

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Wahr, J. M.

    2011-05-01

    Gravity field solutions produced using data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission are subject to errors that increase as a function of increasing spatial resolution. Two commonly used techniques to improve the signal-to-noise ratio in the gravity field solutions are post-processing, via spectral filters, and regularization, which occurs within the least-squares inversion process used to create the solutions. One advantage of post-processing methods is the ability to easily estimate the signal loss resulting from the application of the spectral filter by applying the filter to synthetic gravity field coefficients derived from models of mass variation. This is a critical step in the construction of an accurate error budget. Estimating the amount of signal loss due to regularization, however, requires the execution of the full gravity field determination process to create synthetic instrument data; this leads to a significant cost in computation and expertise relative to post-processing techniques, and inhibits the rapid development of optimal regularization weighting schemes. Thus, while a number of studies have quantified the effects of spectral filtering, signal modification in regularized GRACE gravity field solutions has not yet been estimated. In this study, we examine the effect of one regularization method. First, we demonstrate that regularization can in fact be performed as a post-processing step if the solution covariance matrix is available. Regularization then is applied as a post-processing step to unconstrained solutions from the Center for Space Research (CSR), using weights reported by the Centre National d'Etudes Spatiales/Groupe de Recherches de geodesie spatiale (CNES/GRGS). After regularization, the power spectra of the CSR solutions agree well with those of the CNES/GRGS solutions. Finally, regularization is performed on synthetic gravity field solutions derived from a land surface model, revealing that in
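
    The paper's enabling observation, that regularization can be applied after the fact when the solution covariance matrix is available, has a compact generic form: for a least-squares solution x_hat with covariance P and regularization matrix R, the constrained estimate is x_reg = (P^-1 + R)^-1 P^-1 x_hat. The sketch below is that identity in Python, not the CSR or CNES/GRGS processing code.

    ```python
    import numpy as np

    def regularize_post(x_hat, P, R):
        """Tikhonov-type regularization applied as post-processing to an
        unconstrained solution x_hat with covariance matrix P."""
        P_inv = np.linalg.inv(P)
        return np.linalg.solve(P_inv + R, P_inv @ x_hat)
    ```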

  13. Note on regular black holes in a brane world

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2015-10-01

    In this work, we show that regular black holes in a Randall-Sundrum-type brane world model are generated by the nonlocal bulk influence, expressed by a constant parameter in the brane metric, only in the spherical case. In the axial case (black holes with rotation), this influence forbids their existence. A nonconstant bulk influence is necessary to generate regular black holes with rotation in this context.

  14. An adaptive Tikhonov regularization method for fluorescence molecular tomography.

    PubMed

    Cao, Xu; Zhang, Bin; Wang, Xin; Liu, Fei; Liu, Ke; Luo, Jianwen; Bai, Jing

    2013-08-01

    The high degree of absorption and scattering of photons propagating through biological tissues makes fluorescence molecular tomography (FMT) reconstruction a severely ill-posed problem, and the reconstructed result is susceptible to noise in the measurements. To obtain a reasonable solution, Tikhonov regularization (TR) is generally employed to solve the inverse problem of FMT. However, with a fixed regularization parameter, the Tikhonov solutions suffer from low resolution. In this work, an adaptive Tikhonov regularization (ATR) method is presented. Considering that large regularization parameters smooth the solution at the cost of spatial resolution, while small regularization parameters sharpen the solution at the cost of a higher noise level, the ATR method adaptively updates spatially varying regularization parameters during the iteration process and uses them to penalize the solutions. The ATR method can adequately sharpen the feasible region containing fluorescent probes and smooth the region without fluorescent probes, without resorting to complementary a priori information. Phantom experiments are performed to verify the feasibility of the proposed method. The results demonstrate that the proposed method can improve the spatial resolution and reduce the noise of FMT reconstruction at the same time.
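
    A hedged sketch of the adaptive idea: keep a per-voxel regularization weight, lower it where the current estimate indicates fluorophores, raise it elsewhere, and re-solve. The update rule below is illustrative only, not the authors' formula.

    ```python
    import numpy as np

    def atr_reconstruct(A, y, n_iter=20, lam0=1.0, eps=1e-6):
        """Adaptive spatially varying Tikhonov reconstruction (sketch)."""
        n = A.shape[1]
        lam = np.full(n, lam0)
        x = np.zeros(n)
        for _ in range(n_iter):
            x = np.linalg.solve(A.T @ A + np.diag(lam), A.T @ y)
            x = np.clip(x, 0.0, None)                 # fluorescence yield >= 0
            # smaller penalty where the estimate is bright, larger where dark
            lam = lam0 / (x / (x.max() + eps) + 0.1)
        return x
    ```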

  15. The relationship between lifestyle regularity and subjective sleep quality

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant (rho = -0.4, p < 0.001) correlation between SRM (both metrics) and PSQI, indicating that subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.

  16. Nonlocal means-based regularizations for statistical CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Ma, Jianhua; Liu, Yan; Han, Hao; Li, Lihong; Wang, Jing; Liang, Zhengrong

    2014-03-01

    Statistical iterative reconstruction (SIR) methods have shown remarkable gains over the conventional filtered backprojection (FBP) method in improving image quality for low-dose computed tomography (CT). They reconstruct the CT images by maximizing/minimizing a cost function in a statistical sense, where the cost function usually consists of two terms: the data-fidelity term modeling the statistics of the measured data, and the regularization term reflecting prior information. The regularization term in SIR plays a critical role in successful image reconstruction, and an established family of regularizations is based on the Markov random field (MRF) model. Inspired by the success of the nonlocal means (NLM) algorithm in image processing applications, we proposed, in this work, a family of generic and edge-preserving NLM-based regularizations for SIR. We evaluated one of them, where the potential function takes the quadratic form. Experimental results with both digital and physical phantoms clearly demonstrated that SIR with the proposed regularization can achieve more significant gains than SIR with the widely-used Gaussian MRF regularization and the conventional FBP method, in terms of image noise reduction and resolution preservation.

  17. Dimensional and temporal controls of three-dimensional cell migration by zyxin and binding partners

    PubMed Central

    Fraley, Stephanie I.; Feng, Yunfeng; Giri, Anjil; Longmore, Gregory D.; Wirtz, Denis

    2015-01-01

    Spontaneous molecular oscillations are ubiquitous in biology. But to our knowledge, periodic cell migratory patterns have not been observed. Here we report the highly regular, periodic migration of cells along rectilinear tracks generated inside three-dimensional matrices, with each excursion encompassing several cell lengths, a phenotype that does not occur on conventional substrates. Short hairpin RNA depletion shows that these one-dimensional oscillations are uniquely controlled by zyxin and binding partners α-actinin and p130Cas, but not vasodilator-stimulated phosphoprotein and cysteine-rich protein 1. Oscillations are recapitulated for cells migrating along one-dimensional micropatterns, but not on two-dimensional compliant substrates. These results indicate that although two-dimensional motility can be well described by speed and persistence, three-dimensional motility requires two additional parameters, the dimensionality of the cell paths in the matrix and the temporal control of cell movements along these paths. These results also suggest that the zyxin/α-actinin/p130Cas module may ensure that motile cells in a three-dimensional matrix explore the largest space possible in minimum time. PMID:22395610

  18. Regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Cates, Matthew J

    2014-01-01

    Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular salmeterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and FDA submissions in relation to salmeterol. The date of the most recent search was August 2011. Selection criteria We included controlled parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular salmeterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 26 trials comparing salmeterol to placebo and eight trials comparing with salbutamol. These included 62,815 participants with asthma (including 2,599 children). In six trials (2,766 patients), no serious adverse event data could be obtained. All-cause mortality was higher with regular salmeterol than placebo but the increase was not significant (Peto odds ratio (OR) 1.33 (95% CI 0.85 to 2.08)). Non-fatal serious adverse events were significantly increased when regular salmeterol was compared with placebo (OR 1.15 95% CI 1.02 to 1.29). One extra serious adverse event occurred over 28 weeks for every 188 people

  19. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    SciTech Connect

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an ℓ2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual norm and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  20. Unexpected Regularity in Swimming Behavior of Clausocalanus furcatus Revealed by a Telecentric 3D Computer Vision System

    PubMed Central

    Bianco, Giuseppe; Botte, Vincenzo; Dubroca, Laurent; Ribera d’Alcalà, Maurizio; Mazzocchi, Maria Grazia

    2013-01-01

    Planktonic copepods display a large repertoire of motion behaviors in a three-dimensional environment. Two-dimensional video observations demonstrated that the small copepod Clausocalanus furcatus, one of the most widely distributed calanoids at low to medium latitudes, presented a unique swimming behavior that was continuous and fast and followed notably convoluted trajectories. Furthermore, previous observations indicated that the motion of C. furcatus resembled a random process. We characterized the swimming behavior of this species in three-dimensional space using a video system equipped with telecentric lenses, which allow tracking of zooplankton without the distortion errors inherent in common lenses. Our observations revealed unexpected regularities in the behavior of C. furcatus that appear primarily in the horizontal plane and could not have been identified in previous observations based on lateral views. Our results indicate that the swimming behavior of C. furcatus is based on a limited repertoire of basic kinematic modules but exhibits greater plasticity than previously thought. PMID:23826331

  1. A model and regularization scheme for ultrasonic beamforming clutter reduction.

    PubMed

    Byram, Brett; Dei, Kazuyuki; Tierney, Jaime; Dumont, Douglas

    2015-11-01

    Acoustic clutter produced by off-axis and multipath scattering is known to cause image degradation, and in some cases these sources may be the prime determinants of in vivo image quality. We have previously shown some success addressing these sources of image degradation by modeling the aperture domain signal from different sources of clutter, and then decomposing aperture domain data using the modeled sources. Our previous model had some shortcomings, including model mismatch and failure to recover B-mode speckle statistics. These shortcomings are addressed here by developing a better model and by using a general regularization approach appropriate for the model and data. We present results with L1 (lasso), L2 (ridge), and combined L1/L2 (elastic-net) regularization methods. We call our new method aperture domain model image reconstruction (ADMIRE). Our results demonstrate that ADMIRE with L1 regularization, or with regularization weighted toward L1 in the case of the elastic net, yields improved image quality. L1 by itself works well, but additional improvements are seen with elastic-net regularization over the pure L1 constraint. On in vivo example cases, L1 regularization showed mean contrast improvements of 4.6 and 6.8 dB on fundamental and harmonic images, respectively. Elastic-net regularization (α = 0.9) showed mean contrast improvements of 17.8 dB on fundamental images and 11.8 dB on harmonic images. We also demonstrate that in uncluttered Field II simulations the decluttering algorithm produces the same contrast, contrast-to-noise ratio, and speckle SNR as normal B-mode imaging, demonstrating that ADMIRE preserves typical image features.
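
    As a point of reference for the regularization variants compared above, the sketch below decomposes simulated aperture-domain data with scikit-learn's elastic net, weighting the penalty toward L1 (l1_ratio=0.9, echoing the α = 0.9 weighting reported). The dictionary and sizes are synthetic stand-ins, not the ADMIRE model itself.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(0)
    A = rng.normal(size=(64, 200))        # modeled aperture-domain predictors
    x_true = np.zeros(200)
    x_true[[3, 50]] = [1.0, -0.5]         # two active model components
    y = A @ x_true + 0.01 * rng.normal(size=64)

    model = ElasticNet(alpha=0.01, l1_ratio=0.9, fit_intercept=False,
                       max_iter=10000)
    model.fit(A, y)
    keep = np.abs(model.coef_) > 1e-8     # retain only the identified sources
    y_decluttered = A[:, keep] @ model.coef_[keep]
    ```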

  2. Multiscale regularized reconstruction for enhancing microcalcification in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir; Zhou, Chuan

    2012-03-01

    Digital breast tomosynthesis (DBT) holds strong promise for improving the sensitivity of detecting subtle mass lesions. Detection of microcalcifications is more difficult because of high noise and subtle signals in the large DBT volume. It is important to enhance the contrast-to-noise ratio (CNR) of microcalcifications in DBT reconstruction. A major challenge of implementing microcalcification enhancement or noise regularization in DBT reconstruction is to preserve the image quality of masses, especially those with ill-defined margins and subtle spiculations. We are developing a new multiscale regularization (MSR) method for the simultaneous algebraic reconstruction technique (SART) to improve the CNR of microcalcifications without compromising the quality of masses. Each DBT slice is stratified into different frequency bands via wavelet decomposition and the regularization method applies different degrees of regularization to different frequency bands to preserve features of interest and suppress noise. Regularization is constrained by a characteristic map to avoid smoothing subtle microcalcifications. The characteristic map is generated via image feature analysis to identify potential microcalcification locations in the DBT volume. The MSR method was compared to the non-convex total p-variation (TpV) method and SART with no regularization (NR) in terms of the CNR and the full width at half maximum of the line profiles intersecting calcifications and mass spiculations in DBT of human subjects. The results demonstrated that SART regularized by the MSR method was superior to the TpV method for subtle microcalcifications in terms of CNR enhancement. The MSR method preserved the quality of subtle spiculations better than the TpV method in comparison to NR.
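
    The band-wise regularization can be sketched with a standard wavelet toolbox: decompose each slice, shrink each frequency band by its own amount, and reconstruct. The weights, wavelet, and thresholding rule below are illustrative assumptions; the paper's characteristic map (which exempts likely microcalcifications from smoothing) is not reproduced.

    ```python
    import numpy as np
    import pywt

    def multiscale_regularize(slice_img, weights=(0.0, 0.3, 0.6), wavelet="db2"):
        """Soft-threshold each wavelet detail band with its own strength;
        weights run from the coarsest band to the finest."""
        coeffs = pywt.wavedec2(slice_img, wavelet, level=len(weights) - 1)
        out = [coeffs[0]]                  # approximation band left untouched
        for bands, w in zip(coeffs[1:], weights[1:]):
            thr = w * np.median([np.abs(b).max() for b in bands])
            out.append(tuple(pywt.threshold(b, thr, mode="soft") for b in bands))
        return pywt.waverec2(out, wavelet)
    ```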

  3. Particle motion and Penrose processes around rotating regular black hole

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

    The motion of neutral particles around a rotating regular black hole, derived from the Ayón-Beato-García (ABG) black hole solution by the Newman-Janis algorithm in the preceding paper (Toshmatov et al., Phys. Rev. D 89:104017, 2014), has been studied. The dependence of the ISCO (innermost stable circular orbit along geodesics) and of unstable orbits on the value of the electric charge of the rotating regular black hole has been shown. Energy extraction from the rotating regular black hole through various processes has been examined. We have found an expression for the center-of-mass energy of colliding neutral particles coming from infinity, based on the BSW (Bañados-Silk-West) mechanism. The electric charge Q of the rotating regular black hole decreases the potential of the gravitational field as compared to the Kerr black hole, and the particles have less bound energy at the circular geodesics. This causes an increase in the efficiency of energy extraction through the BSW process in the presence of the electric charge Q. Furthermore, we have studied the particle emission due to the BSW effect, assuming that two neutral particles collide near the horizon of the rotating regular extremal black hole and produce another two particles. We have shown that the efficiency of the energy extraction is less than 146.6 %, the value valid for the Kerr black hole. It has also been demonstrated that the efficiency of the energy extraction from the rotating regular black hole via the Penrose process decreases with increasing electric charge Q and is smaller than 20.7 %, the value for the extreme Kerr black hole with specific angular momentum a = M.

  4. Another look at statistical learning theory and regularization.

    PubMed

    Cherkassky, Vladimir; Ma, Yunqian

    2009-09-01

    The paper reviews and highlights distinctions between function-approximation (FA) and VC theory and methodology, mainly within the setting of regression problems and a squared-error loss function, and illustrates empirically the differences between the two when data is sparse and/or the input distribution is non-uniform. In FA theory, the goal is to estimate an unknown true dependency (or 'target' function) in regression problems, or the posterior probability P(y|x) in classification problems. In VC theory, the goal is to 'imitate' the unknown target function, in the sense of minimization of prediction risk or good 'generalization'. That is, the result of VC learning depends on the (unknown) input distribution, while that of FA does not. This distinction is important because regularization theory, originally introduced under a clearly stated FA setting [Tikhonov, N. (1963). On solving ill-posed problem and method of regularization. Doklady Akademii Nauk USSR, 153, 501-504; Tikhonov, N., & V. Y. Arsenin (1977). Solution of ill-posed problems. Washington, DC: W. H. Winston], has later been used under the risk-minimization or VC setting. More recently, several authors [Evgeniou, T., Pontil, M., & Poggio, T. (2000). Regularization networks and support vector machines. Advances in Computational Mathematics, 13, 1-50; Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning: Data mining, inference and prediction. Springer; Poggio, T. and Smale, S., (2003). The mathematics of learning: Dealing with data. Notices of the AMS, 50 (5), 537-544] applied a constructive methodology based on the regularization framework to learning dependencies from data (under the VC-theoretical setting). However, such regularization-based learning is usually presented as a purely constructive methodology (with no clearly stated problem setting). This paper compares FA/regularization and VC/risk minimization methodologies in terms of underlying theoretical assumptions. The control of model

  5. The ARM Best Estimate 2-dimensional Gridded Surface

    SciTech Connect

    Xie, Shaocheng; Qi, Tang

    2015-06-15

    The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.

  6. Quasi-regular solutions to a class of 3D degenerating hyperbolic equations

    NASA Astrophysics Data System (ADS)

    Hristov, T. D.; Popivanov, N. I.; Schneider, M.

    2012-11-01

    In the fifties M. Protter stated new three-dimensional (3D) boundary value problems (BVP) for mixed type equations of first kind. For hyperbolic-elliptic equations they are multidimensional analogues of the classical two-dimensional (2D) Morawetz-Guderley transonic problem. Up to now, in this case, not a single example of a nontrivial solution to the new problem, nor a general existence result, is known. The difficulties appear even for BVP in the hyperbolic part of the domain, which were formulated by Protter for weakly hyperbolic equations. In that case the Protter problems are 3D analogues of the plane Darboux or Cauchy-Goursat problems. It is interesting that, in contrast to the planar problems, the new 3D problems are strongly ill-posed. Some of the Protter problems for degenerating hyperbolic equations without lower order terms, or even for the usual wave equation, have infinite-dimensional kernels. Therefore there are infinitely many orthogonality conditions for classical solvability of their adjoint problems. So it is interesting to obtain results on uniqueness of solutions when first order terms are added to the equation. In the present paper we do this and find conditions on the coefficients under which we prove uniqueness of quasi-regular solutions to the Protter problems.

  7. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. It is therefore a major challenge to extract optimal features that improve classification while simultaneously reducing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To extract nonlinear features directly from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with Fisher criteria. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms the other conventional approaches.
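
    The role of the regularization in RKMFA-style methods is to keep the compactness (scatter) matrix invertible when samples are scarce. A minimal sketch of that step, assuming precomputed symmetric compactness and separability matrices Sc and Sp (names illustrative):

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def regularized_projection(Sc, Sp, eps=1e-3, dim=2):
        """Solve the generalized eigenproblem Sp w = lam (Sc + eps I) w;
        the eps*I term regularizes a possibly singular compactness matrix
        (the small-sample-size case)."""
        n = Sc.shape[0]
        vals, vecs = eigh(Sp, Sc + eps * np.eye(n))
        order = np.argsort(vals)[::-1]        # most discriminative first
        return vecs[:, order[:dim]]
    ```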

  8. Regular treatment with formoterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Cates, Matthew J

    2014-01-01

    Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular formoterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and Food and Drug Administration (FDA) submissions in relation to formoterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular formoterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second author checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 22 studies (8032 participants) comparing regular formoterol to placebo and salbutamol. Non-fatal serious adverse event data could be obtained for all participants from published studies comparing formoterol and placebo but only 80% of those comparing formoterol with salbutamol or terbutaline. Three deaths occurred on regular formoterol and none on placebo; this difference was not statistically significant. It was not possible to assess disease-specific mortality in view of the small number of deaths. Non-fatal serious adverse events were significantly increased when

  9. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4
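
    Since the Lanczos machinery above exists to approximate the L-curve cheaply, a small dense version makes the underlying criterion explicit: sweep the regularization parameter, record residual norm versus solution norm, and pick the corner of the log-log curve. The SVD-based sketch below is generic, not the parallel GRACE implementation.

    ```python
    import numpy as np

    def lcurve_corner(A, y, lambdas):
        """Return the Tikhonov parameter at the L-curve corner (max curvature)."""
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        b = U.T @ y
        rho, eta = [], []
        for lam in lambdas:
            f = s**2 / (s**2 + lam**2)                # Tikhonov filter factors
            rho.append(np.linalg.norm((1 - f) * b))   # residual norm
            eta.append(np.linalg.norm(f * b / s))     # solution norm
        lr, le = np.log(rho), np.log(eta)
        dlr, dle = np.gradient(lr), np.gradient(le)
        d2lr, d2le = np.gradient(dlr), np.gradient(dle)
        kappa = (dlr * d2le - dle * d2lr) / (dlr**2 + dle**2) ** 1.5
        return lambdas[int(np.argmax(kappa))]

    # Example: lam = lcurve_corner(A, y, np.geomspace(1e-6, 1e2, 200))
    ```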

  10. On Nonperiodic Euler Flows with Hölder Regularity

    NASA Astrophysics Data System (ADS)

    Isett, Philip; Oh, Sung-Jin

    2016-08-01

    In (Isett, Regularity in time along the coarse scale flow for the Euler equations, 2013), the first author proposed a strengthening of Onsager's conjecture on the failure of energy conservation for incompressible Euler flows with Hölder regularity not exceeding 1/3. This stronger form of the conjecture implies that anomalous dissipation will fail for a generic Euler flow with regularity below the Onsager critical space L_t^∞ B_{3,∞}^{1/3} due to low regularity of the energy profile. This paper is the first and main paper in a series of two, the results of which may be viewed as first steps towards establishing the conjectured failure of energy regularity for generic solutions with Hölder exponent less than 1/5. The main result of the present paper shows that any given smooth Euler flow can be perturbed in C^{1/5-ε}_{t,x} on any pre-compact subset of R × R^3 to violate energy conservation. Furthermore, the perturbed solution is no smoother than C^{1/5-ε}_{t,x}. As a corollary of this theorem, we show the existence of nonzero C^{1/5-ε}_{t,x} solutions to Euler with compact space-time support, generalizing previous work of the first author (Isett, Hölder continuous Euler flows in three dimensions with compact support in time, 2012) to the nonperiodic setting.

  11. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT tight-framelet regularizer shows promise for SPECT image reconstruction using the PAPA method.
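
    Within an algorithm like PAPA, the non-differentiable ℓ1 term enters through its proximal operator: transform, soft-threshold, transform back. The sketch below uses an orthogonal DCT for brevity, whereas the paper uses a non-decimated (tight-frame) DCT; parameter names are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_l1_prox(img, lam):
        """Proximal step for lam * ||DCT(img)||_1 with an orthonormal DCT."""
        c = dctn(img, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft threshold
        return idctn(c, norm="ortho")
    ```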

  12. A Generic Path Algorithm for Regularized Statistical Estimation

    PubMed Central

    Zhou, Hua; Wu, Yichao

    2014-01-01

    Regularization is widely used in statistics and machine learning to prevent overfitting and to guide solutions toward prior information. In general, a regularized estimation problem minimizes the sum of a loss function and a penalty term. The penalty term is usually weighted by a tuning parameter and encourages certain constraints on the parameters to be estimated. Particular choices of constraints lead to the popular lasso, fused-lasso, and other generalized ℓ1 penalized regression methods. In this article we follow a recent idea by Wu (2011, 2012) and propose an exact path solver based on ordinary differential equations (EPSODE) that works for any convex loss function and can deal with generalized ℓ1 penalties as well as more complicated regularization such as inequality constraints encountered in shape-restricted regressions and nonparametric density estimation. Non-asymptotic error bounds for the equality regularized estimates are derived. In practice, EPSODE can be coupled with AIC, BIC, Cp or cross-validation to select an optimal tuning parameter, or it provides a convenient model space for performing model averaging or aggregation. Our applications to generalized ℓ1 regularized generalized linear models, shape-restricted regressions, Gaussian graphical models, and nonparametric density estimation showcase the potential of the EPSODE algorithm. PMID:25242834
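
    EPSODE follows the exact solution path by integrating ordinary differential equations in the tuning parameter. As a familiar reference point (not the EPSODE algorithm itself), the sketch below traces the lasso path on a grid with scikit-learn, showing the same qualitative object: coefficients entering one by one as the penalty relaxes.

    ```python
    import numpy as np
    from sklearn.linear_model import lasso_path

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))
    beta = np.zeros(10)
    beta[:3] = [2.0, -1.0, 0.5]
    y = X @ beta + 0.1 * rng.normal(size=100)

    alphas, coefs, _ = lasso_path(X, y)
    # coefs[j, k]: j-th coefficient at the k-th penalty value; the active set
    # grows monotonically here as alpha shrinks along the path.
    print((np.abs(coefs) > 1e-8).sum(axis=0))
    ```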

  13. X-ray computed tomography using curvelet sparse regularization

    SciTech Connect

    Wieczorek, Matthias Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.

    2015-04-15

    Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method’s strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  14. Image Super-Resolution via Adaptive Regularization and Sparse Representation.

    PubMed

    Cao, Feilong; Cai, Miaomiao; Tan, Yuanpeng; Zhao, Jianwei

    2016-07-01

    Previous studies have shown that image patches can be well represented as a sparse linear combination of elements from an appropriately selected over-complete dictionary. Recently, single-image super-resolution (SISR) via sparse representation using blurred and downsampled low-resolution images has attracted increasing interest, where the aim is to obtain the coefficients for sparse representation by solving an l0 or l1 norm optimization problem. The l0 optimization is a nonconvex and NP-hard problem, while the l1 optimization usually requires many more measurements and presents new challenges even for images of the usual size, so we propose a new approach for SISR recovery based on regularized nonconvex optimization. The proposed approach is potentially a powerful method for recovering SISR via sparse representations, and it can yield a sparser solution than the l1 regularization method. We also consider the best choice of lp regularization over all p in (0, 1), proposing a scheme that adaptively selects the norm value for each image patch. In addition, we provide a method for adaptively estimating the best value of the regularization parameter λ, and we discuss an alternating iteration method for selecting p and λ. Our experiments demonstrate that the proposed nonconvex regularized optimization method can outperform the convex optimization method and generate higher quality images.

  15. Fast multislice fluorescence molecular tomography using sparsity-inducing regularization.

    PubMed

    Hejazi, Sedigheh Marjaneh; Sarkar, Saeed; Darezereshki, Ziba

    2016-02-01

    Fluorescence molecular tomography (FMT) is a rapidly growing imaging method that facilitates the recovery of small fluorescent targets within biological tissue. The major challenge facing the FMT reconstruction method is the ill-posed nature of the inverse problem. In order to overcome this problem, the acquisition of large FMT datasets and the utilization of a fast FMT reconstruction algorithm with sparsity regularization have been suggested recently. Therefore, the use of a joint L1/total-variation (TV) regularization as a means of solving the ill-posed FMT inverse problem is proposed. A comparative quantified analysis of regularization methods based on L1-norm and TV are performed using simulated datasets, and the results show that the fast composite splitting algorithm regularization method can ensure the accuracy and robustness of the FMT reconstruction. The feasibility of the proposed method is evaluated in an in vivo scenario for the subcutaneous implantation of a fluorescent-dye-filled capillary tube in a mouse, and also using hybrid FMT and x-ray computed tomography data. The results show that the proposed regularization overcomes the difficulties created by the ill-posed inverse problem.
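
    The joint L1/TV penalty is typically handled by composite splitting: take a gradient step on the data term, apply each penalty's proximal operator separately, and average. The sketch below shows one such step under stated assumptions (scikit-image's TV denoiser as a stand-in prox); it illustrates the idea of the fast composite splitting algorithm rather than the authors' implementation.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fcsa_step(x, grad, step, lam_l1, lam_tv):
        """One composite-splitting step for a joint L1 + TV penalty."""
        z = x - step * grad                      # gradient step on data term
        x_l1 = soft(z, step * lam_l1)            # prox of the L1 term
        x_tv = denoise_tv_chambolle(z, weight=step * lam_tv)  # prox of TV
        return 0.5 * (x_l1 + x_tv)               # average the two solutions
    ```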

  16. Spatially varying regularization of deconvolution in 3D microscopy.

    PubMed

    Seo, J; Hwang, S; Lee, J-M; Park, H

    2014-08-01

    Confocal microscopy has become an essential tool for exploring biospecimens in 3D. Confocal microscopy images are nevertheless degraded by out-of-focus blur and Poisson noise. Many deconvolution methods, including the Richardson-Lucy (RL) method, the Tikhonov method and the split-gradient (SG) method, are well established. The RL deconvolution method results in enhanced image quality, especially for Poisson noise. The Tikhonov deconvolution method improves on the RL method by imposing a prior model of spatial regularization, which encourages adjacent voxels to appear similar. The SG method also contains spatial regularization and is capable of incorporating many edge-preserving priors, resulting in improved image quality. For both the Tikhonov and SG methods, the strength of the spatial regularization is fixed regardless of spatial location. The Tikhonov and SG deconvolution methods are improved upon in this study by allowing the strength of the spatial regularization to differ across spatial locations in a given image. The novel method shows improved image quality. The method was tested on phantom data for which the ground truth and the point spread function are known. A Kullback-Leibler (KL) divergence value of 0.097 is obtained by applying spatially variable regularization to the SG method, whereas a KL value of 0.409 is obtained with the Tikhonov method. In tests on real data, for which the ground truth is unknown, the reconstructed data show improved noise characteristics while maintaining important image features such as edges.
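
    A minimal sketch of the family of updates being varied: a Richardson-Lucy iteration with a total-variation regularization term whose weight lam_map can differ per pixel. The TV term follows the common RL-TV form; treating lam_map as spatially varying is the idea under study, and the details here are illustrative, not the authors' code.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def rl_tv_step(est, img, psf, lam_map, eps=1e-8):
        """One Richardson-Lucy update with spatially varying TV regularization."""
        conv = fftconvolve(est, psf, mode="same")
        ratio = fftconvolve(img / (conv + eps), psf[::-1, ::-1], mode="same")
        gy, gx = np.gradient(est)
        norm = np.sqrt(gx**2 + gy**2) + eps
        div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
        return est * ratio / (1.0 - lam_map * div + eps)
    ```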

  17. Regular and Irregular Mixing in Hydrocarbon Block Copolymers

    NASA Astrophysics Data System (ADS)

    Register, Richard; Beckingham, Bryan

    2014-03-01

    Since hydrocarbon polymers interact through relatively simple (dispersive) interactions, one might expect them to be described by simple models of mixing energetics, such as regular mixing. However, the pioneering work of Graessley on saturated hydrocarbon polymer blends showed that while regular mixing is obeyed in some cases, both positive and negative deviations (in the magnitude of the mixing enthalpy) from regular mixing are observed in other cases. Here, we describe the mixing energetics for two series of hydrocarbon polymers wherein the interaction strengths may be continuously tuned, and which can be readily incorporated into block copolymers. Random copolymers of styrene and medium-vinyl isoprene, in which either the isoprene or both the isoprene and styrene units have been saturated, obey regular mixing over the entire composition range and for both hydrogenated derivatives. Well-defined block copolymers with arbitrarily small interblock interaction strengths can be constructed from these units, permitting the interdomain spacing to be made arbitrarily large while holding the order-disorder transition temperature constant. However, block copolymers of hydrogenated polybutadiene with such random copolymers show very strong positive deviations from regular mixing when the styrene aromaticity is preserved, and sizable negative deviations when the styrene units are saturated to vinylcyclohexane. Both of these cases can be quantitatively described by a ternary mixing model.

  18. Modeling and Analyzing Web Service Behavior with Regular Flow Nets

    NASA Astrophysics Data System (ADS)

    Xie, Jingang; Tan, Qingping; Cao, Guorong

    Web services are emerging as a promising technology for the development of next-generation distributed heterogeneous software systems. To support automated service composition and adaptation, there should be a formal approach for modeling Web service behavior. In this paper we present a novel methodology for modeling and analysis based on regular flow nets, which extend Petri nets and YAWL. Firstly, we motivate the formal definition of regular flow nets. Secondly, the formalism for dealing with symbolic markings is developed and used to define the symbolic coverability tree. Finally, an algorithm for generating the symbolic coverability tree is presented. Using the symbolic coverability tree we can analyze the properties of regular flow nets that concern us. The modeling and analysis technique we propose allows us to deal with cyclic services and data dependence among services.

  19. Optimized Bayes variational regularization prior for 3D PET images.

    PubMed

    Rapisarda, Eugenio; Presotto, Luca; De Bernardi, Elisabetta; Gilardi, Maria Carla; Bettinardi, Valentino

    2014-09-01

    A new prior for variational Maximum a Posteriori regularization is proposed to be used in a 3D One-Step-Late (OSL) reconstruction algorithm accounting also for the Point Spread Function (PSF) of the PET system. The new regularization prior strongly smoothes background regions, while preserving transitions. A detectability index is proposed to optimize the prior. The new algorithm has been compared with different reconstruction algorithms such as 3D-OSEM+PSF, 3D-OSEM+PSF+post-filtering and 3D-OSL with a Gauss-Total Variation (GTV) prior. The proposed regularization allows controlling noise, while maintaining good signal recovery; compared to the other algorithms it demonstrates a very good compromise between an improved quantitation and good image quality. PMID:24958594

  20. Regularity based descriptor computed from local image oscillations.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo; Legrand, Pierrick; Lutton, Evelyne

    2007-05-14

    This work presents a novel local image descriptor based on the concept of pointwise signal regularity. Local image regions are extracted using either an interest point or an interest region detector, and discriminative feature vectors are constructed by uniformly sampling the pointwise Hölderian regularity around each region center. Regularity estimation is performed using local image oscillations, the most straightforward method directly derived from the definition of the Hölder exponent. Furthermore, estimating the Hölder exponent in this manner has proven to be superior, in most cases, to wavelet-based estimation, as was shown in previous work. Our descriptor shows invariance to illumination change, JPEG compression, image rotation and scale change. Results show that the proposed descriptor is stable with respect to variations in imaging conditions, and reliable performance metrics prove it to be comparable and in some instances better than SIFT, the state-of-the-art in local descriptors. PMID:19546918

  1. Analysis of the "Learning in Regular Classrooms" movement in China.

    PubMed

    Deng, M; Manset, G

    2000-04-01

    The Learning in Regular Classrooms experiment has evolved in response to China's efforts to educate its large population of students with disabilities who, until the mid-1980s, were denied a free education. In the Learning in Regular Classrooms, students with disabilities (primarily sensory impairments or mild mental retardation) are educated in neighborhood schools in mainstream classrooms. Despite difficulties associated with developing effective inclusive programming, this approach has contributed to a major increase in the enrollment of students with disabilities and increased involvement of schools, teachers, and parents in China's newly developing special education system. Here we describe the development of the Learning in Regular Classroom approach and the challenges associated with educating students with disabilities in China.

  2. Methods for determining regularization for atmospheric retrieval problems

    NASA Astrophysics Data System (ADS)

    Steck, Tilman

    2002-03-01

    The atmosphere of Earth has already been investigated by several spaceborne instruments, and several further instruments will be launched, e.g., NASA's Earth Observing System Aura platform and the European Space Agency's Environmental Satellite. To stabilize the results of atmospheric retrievals, constraints are used in the iteration process: both hard constraints (discretization of the retrieval grid) and soft constraints (regularization operators) are included in the retrieval. Tikhonov regularization is often used as a soft constraint. In this study, different types of Tikhonov operator were compared, and several new methods were developed to determine the optimal strength of the constraint operationally. The resulting regularization parameters were applied successfully to an ozone retrieval from simulated nadir sounding spectra like those expected to be measured by the Tropospheric Emission Spectrometer, which is part of the Aura platform. Retrievals were characterized by means of estimated error, averaging kernel, vertical resolution, and degrees of freedom.
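
    The "types of Tikhonov operator" compared in such studies are typically discrete derivative operators of different orders. A generic sketch, assuming a linear forward model K and regularization strength gamma (names illustrative):

    ```python
    import numpy as np

    def tikhonov_operator(n, order=1):
        """Order 0 penalizes the profile itself, order 1 its vertical
        gradient, order 2 its curvature."""
        L = np.eye(n)
        for _ in range(order):
            L = np.diff(L, axis=0)   # repeated first differences
        return L

    def retrieve(K, y, L, gamma):
        """Minimize ||K x - y||^2 + gamma * ||L x||^2."""
        return np.linalg.solve(K.T @ K + gamma * (L.T @ L), K.T @ y)
    ```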

  3. Structural characterization of the packings of granular regular polygons

    NASA Astrophysics Data System (ADS)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  4. Breast ultrasound tomography with total-variation regularization

    SciTech Connect

    Huang, Lianjie; Li, Cuiping; Duric, Neb

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at the Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate the applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.

  5. Radial basis function networks and complexity regularization in function learning.

    PubMed

    Krzyzak, A; Linder, T

    1998-01-01

    In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single hidden layer radial basis function network. Our approach differs from previous complexity regularization neural-network function learning schemes in that we operate with random covering numbers and l1 metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
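
    The following sketch illustrates the shape of a complexity-regularization scheme for an RBF network: empirical risk minimization over candidate network sizes plus a size-dependent penalty. The penalty c*k*log(n)/n and the quantile placement of centers are stand-in assumptions, not the covering-number bounds derived in the paper.

```python
import numpy as np

def fit_rbf(X, y, centers, width, ridge=1e-6):
    """Least-squares fit of a single-hidden-layer RBF network (1D inputs)."""
    Phi = np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centers)), Phi.T @ y)
    return w, Phi

def complexity_regularized_rbf(X, y, max_k=20, width=0.3, c=0.5):
    """Pick the network size minimizing empirical risk + complexity penalty.
    The penalty c*k*log(n)/n is an illustrative stand-in for the paper's
    covering-number-based complexity term."""
    n = len(X)
    best = None
    for k in range(1, max_k + 1):
        centers = np.quantile(X, np.linspace(0, 1, k))
        w, Phi = fit_rbf(X, y, centers, width)
        risk = np.mean((Phi @ w - y) ** 2) + c * k * np.log(n) / n
        if best is None or risk < best[0]:
            best = (risk, k, centers, w)
    return best
```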

  6. Manufacture of Regularly Shaped Sol-Gel Pellets

    NASA Technical Reports Server (NTRS)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, and regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogels, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogels. Advantageously, the extruded pellets can be dropped directly into a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross-linker. These pellets can be dried to mechanically robust X-Aerogels.

  7. Three-dimensional supersonic internal flows

    NASA Astrophysics Data System (ADS)

    Mohan, J. A.; Skews, B. W.

    2013-09-01

    In order to examine the transition between regular and Mach reflection in a three-dimensional flow, a range of special geometry test pieces, and inlets, were designed. The concept is to have a geometry consisting of two plane wedges which results in regular reflection between the incident waves off the top and bottom of the inlet capped by two curved end sections causing Mach reflection. The merging of these two reflection patterns and the resulting downstream flow are studied using laser vapor screen and shadowgraph imaging supported by numerical simulation. An angled Mach disc is formed which merges with the line of regular reflection. A complex wave pattern results with the generation of a bridging shock connecting the reflected wave from the Mach reflection with the reflected waves from the regular reflection. In order to experimentally access the flow within the duct, a number of tests were conducted with one end cap removed. This resulted in a modified flow due to the expansive flow at the open end, the influence of which was also studied in more detail.

  8. Local conservative regularizations of compressible magnetohydrodynamic and neutral flows

    NASA Astrophysics Data System (ADS)

    Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.

    2016-02-01

    Ideal systems like magnetohydrodynamics (MHD) and Euler flow may develop singularities in vorticity (w = ∇×v). Viscosity and resistivity provide dissipative regularizations of the singularities. In this paper, we propose a minimal, local, conservative, nonlinear, dispersive regularization of compressible flow and ideal MHD, in analogy with the KdV regularization of the 1D kinematic wave equation. This work extends and significantly generalizes earlier work on incompressible Euler and ideal MHD. It involves a micro-scale cutoff length λ which is a function of density, unlike in the incompressible case. In MHD, it can be taken to be of order the electron collisionless skin depth c/ω_pe. Our regularization preserves the symmetries of the original systems and, with appropriate boundary conditions, leads to associated conservation laws. Energy and enstrophy are subject to a priori bounds determined by initial data in contrast to the unregularized systems. A Hamiltonian and Poisson bracket formulation is developed and applied to generalize the constitutive relation to bound higher moments of vorticity. A "swirl" velocity field is identified, and shown to transport w/ρ and B/ρ, generalizing the Kelvin-Helmholtz and Alfvén theorems. The steady regularized equations are used to model a rotating vortex, MHD pinch, and a plane vortex sheet. The proposed regularization could facilitate numerical simulations of fluid/MHD equations and provide a consistent statistical mechanics of vortices/current filaments in 3D, without blowup of enstrophy. Implications for detailed analyses of fluid and plasma dynamic systems arising from our work are briefly discussed.

  9. Zigzag stacks and m-regular linear stacks.

    PubMed

    Chen, William Y C; Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2014-12-01

    The contact map of a protein fold is a graph that represents the patterns of contacts in the fold. It is known that the contact map can be decomposed into stacks and queues. RNA secondary structures are special stacks in which the degree of each vertex is at most one and each arc has length of at least two. Waterman and Smith derived a formula for the number of RNA secondary structures of length n with exactly k arcs. Höner zu Siederdissen et al. developed a folding algorithm for extended RNA secondary structures in which each vertex has maximum degree two. An equation for the generating function of extended RNA secondary structures was obtained by Müller and Nebel by using a context-free grammar approach, which leads to an asymptotic formula. In this article, we consider m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by two. Extended RNA secondary structures are exactly 2-regular linear stacks. For any m ≥ 2, we obtain an equation for the generating function of the m-regular linear stacks. For given m, we deduce a recurrence relation and an asymptotic formula for the number of m-regular linear stacks on n vertices. To establish the equation, we use the reduction operation of Chen, Deng, and Du to transform an m-regular linear stack to an m-reduced zigzag (or alternating) stack. Then we find an equation for m-reduced zigzag stacks leading to an equation for m-regular linear stacks. PMID:25455155
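
    For intuition, the Waterman-Smith-style counting of classical secondary structures (degree at most one) with minimum arc length m admits a short dynamic program; the recurrence below covers that classical case only and is a simplified stand-in for the grammar of the extended, degree-two m-regular linear stacks treated in the article.

```python
def count_structures(n, m=2):
    """Count classical RNA secondary structures on n vertices (each vertex
    of degree <= 1, every arc of length >= m) via the recurrence
        S(j) = S(j-1) + sum_{k=0}^{j-m-1} S(k) * S(j-2-k),
    with S(j) = 1 for j <= m, where no arc fits."""
    S = [1] * (m + 1)
    for j in range(m + 1, n + 1):
        S.append(S[j - 1] + sum(S[k] * S[j - 2 - k] for k in range(j - m)))
    return S[n]

# For m = 2 this reproduces 1, 1, 1, 2, 4, 8, 17, 37, ... (OEIS A004148).
print([count_structures(n, m=2) for n in range(8)])
```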

  10. Regularization of languages by adults and children: A mathematical framework.

    PubMed

    Rische, Jacquelyn L; Komarova, Natalia L

    2016-02-01

    The fascinating ability of humans to modify the linguistic input and "create" a language has been widely discussed. In the work of Newport and colleagues, it has been demonstrated that both children and adults have some ability to process inconsistent linguistic input and "improve" it by making it more consistent. In Hudson Kam and Newport (2009), artificial miniature language acquisition from an inconsistent source was studied. It was shown that (i) children are better at language regularization than adults and that (ii) adults can also regularize, depending on the structure of the input. In this paper we create a learning algorithm of the reinforcement-learning type, which exhibits patterns reported in Hudson Kam and Newport (2009) and suggests a way to explain them. It turns out that in order to capture the differences between children's and adults' learning patterns, we need to introduce a certain asymmetry in the learning algorithm. Namely, we have to assume that the reaction of the learners differs depending on whether or not the source's input coincides with the learner's internal hypothesis. We interpret this result in the context of a different reaction of children and adults to implicit, expectation-based evidence, positive or negative. We propose that a possible mechanism that contributes to the children's ability to regularize an inconsistent input is related to their heightened sensitivity to positive evidence rather than the (implicit) negative evidence. In our model, regularization comes naturally as a consequence of a stronger reaction of the children to evidence supporting their preferred hypothesis. In adults, their ability to adequately process implicit negative evidence prevents them from regularizing the inconsistent input, resulting in a weaker degree of regularization. PMID:26580218
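
    A toy version of such an asymmetric learner can be written in a few lines; the update rule and learning rates below are illustrative assumptions in the spirit of the paper, not the authors' exact algorithm.

```python
import numpy as np

def learn(q=0.7, alpha_match=0.10, alpha_mismatch=0.02,
          n_steps=5000, seed=0):
    """Toy asymmetric reinforcement learner (illustrative only). p is the
    learner's probability of producing form A; the source produces A with
    probability q. Input confirming the learner's preferred hypothesis is
    weighted by alpha_match, disconfirming input by alpha_mismatch."""
    rng = np.random.default_rng(seed)
    p = 0.5
    for _ in range(n_steps):
        heard_A = rng.random() < q
        prefers_A = p >= 0.5
        alpha = alpha_match if heard_A == prefers_A else alpha_mismatch
        target = 1.0 if heard_A else 0.0
        p += alpha * (target - p)
    return p

# Child-like learner (weak reaction to disconfirming input) regularizes:
print(learn(alpha_mismatch=0.005))  # p is pushed toward 1, the majority form
# Adult-like learner (symmetric updates) probability-matches: p stays near q.
print(learn(alpha_mismatch=0.10))
```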

  11. McGehee regularization of general SO(3)-invariant potentials and applications to stationary and spherically symmetric spacetimes

    NASA Astrophysics Data System (ADS)

    Galindo, Pablo; Mars, Marc

    2014-12-01

    The McGehee regularization is a method to study the singularity at the origin of the dynamical system describing a point particle in a plane moving under the action of a power-law potential. It was used by Belbruno and Pretorius (2011 Class. Quantum Grav. 28 195007) to perform a dynamical system regularization of the singularity at the center of the motion of massless test particles in the Schwarzschild spacetime. In this paper, we generalize the McGehee transformation so that we can regularize the singularity at the origin of the dynamical system describing the motion of causal geodesics (timelike or null) in any stationary and spherically symmetric spacetime of Kerr-Schild form. We first show that the geodesics for both massive and massless particles can be described globally in the Kerr-Schild spacetime as the motion of a Newtonian point particle in a suitable radial potential and study the conditions under which the central singularity can be regularized using an extension of the McGehee method. As an example, we apply these results to causal geodesics in the Schwarzschild and Reissner-Nordström spacetimes. Interestingly, the geodesic trajectories in the whole maximal extension of both spacetimes can be described by a single two-dimensional phase space with non-trivial topology. This topology arises from the presence of excluded regions in the phase space determined by the condition that the tangent vector of the geodesic be causal and future directed.

  12. Adiabatic regularization of power spectra in nonminimally coupled chaotic inflation

    NASA Astrophysics Data System (ADS)

    Alinea, Allan L.

    2016-10-01

    We investigate the effect of adiabatic regularization on both the tensor- and scalar-perturbation power spectra in nonminimally coupled chaotic inflation. Similar to the case of minimally coupled general single-field inflation, we find that the subtraction term is suppressed by an exponentially decaying factor involving the number of e-folds. By following the subtraction term long enough beyond horizon crossing, the regularized power spectrum tends to the "bare" power spectrum. This study justifies the use of the unregularized ("bare") power spectrum in standard calculations.

  13. Duration of growth suppressive effects of regular inhaled corticosteroids

    PubMed Central

    Doull, I.; Campbell, M.; Holgate, S.

    1998-01-01

    The growth of 50 children receiving regular inhaled corticosteroids was segregated into divisions of six weeks from the start of treatment and compared with their growth when not receiving regular corticosteroids using a random effects regression model. Growth suppression was most marked during the initial six weeks after starting treatment, with most suppression occurring during the initial 18 weeks. Thereafter the children's growth was similar to their growth when not receiving treatment. These findings have important consequences for patterns of treatment of asthma in children. PMID:9579164

  14. Study of the Navier-Stokes regularity problem with critical norms

    NASA Astrophysics Data System (ADS)

    Ohkitani, Koji

    2016-04-01

    We study the basic problems of regularity of the Navier-Stokes equations. The blowup criterion based on the critical H^{1/2}-norm is bounded from above by a logarithmic function (Robinson et al 2012 J. Math. Phys. 53 115618). Assuming that the Cauchy-Schwarz inequality for the H^{1/2}-norm is not an overestimate, we replace it by the square root of a product of the energy and the enstrophy. We carry out a simple asymptotic analysis to determine the time evolution of the energy. This generalises the (already ruled-out) self-similar blowup ansatz. Some numerical results are also presented, which support the above-mentioned replacement. We carry out a similar analysis for the four-dimensional Navier-Stokes equations.

  15. Regular network model for the sea ice-albedo feedback in the Arctic.

    PubMed

    Müller-Stoffels, Marc; Wackerbauer, Renate

    2011-03-01

    The Arctic Ocean and sea ice form a feedback system that plays an important role in the global climate. The complexity of highly parameterized global circulation (climate) models makes it very difficult to assess feedback processes in climate without the concurrent use of simple models where the physics is understood. We introduce a two-dimensional energy-based regular network model to investigate feedback processes in an Arctic ice-ocean layer. The model includes the nonlinear aspect of the ice-water phase transition, a nonlinear diffusive energy transport within a heterogeneous ice-ocean lattice, and spatiotemporal atmospheric and oceanic forcing at the surfaces. First results for a horizontally homogeneous ice-ocean layer show bistability and related hysteresis between perennial ice and perennial open water for varying atmospheric heat influx. Seasonal ice cover exists as a transient phenomenon. We also find that ocean heat fluxes are more efficient than atmospheric heat fluxes to melt Arctic sea ice.

  16. Self-propelled particle transport in regular arrays of rigid asymmetric obstacles.

    PubMed

    Potiguar, Fabricio Q; Farias, G A; Ferreira, W P

    2014-07-01

    We report numerical results which show the achievement of net transport of self-propelled particles (SPPs) in the presence of a two-dimensional regular array of convex, either symmetric or asymmetric, rigid obstacles. The repulsive interparticle (soft disks) and particle-obstacle interactions present no alignment rule. We find that SPPs present a vortex-type motion around convex symmetric obstacles even in the absence of hydrodynamic effects. Such a motion is not observed for a single SPP, but is a consequence of the collective motion of SPPs around the obstacles. A steady particle current is spontaneously established in an array of nonsymmetric convex obstacles (which presents no cavity in which particles may be trapped), and in the absence of an external field. Our results are mainly a consequence of the tendency of the self-propelled particles to attach to solid surfaces.

  17. Diffraction of a shock wave by a compression corner; regular and single Mach reflection

    NASA Technical Reports Server (NTRS)

    Vijayashankar, V. S.; Kutler, P.; Anderson, D.

    1976-01-01

    The two-dimensional, time-dependent Euler equations which govern the flow field resulting from the interaction of a planar shock with a compression corner are solved with initial conditions that result in either regular reflection or single Mach reflection of the incident planar shock. The Euler equations, which are hyperbolic, are transformed to include the self-similarity of the problem. A normalization procedure is employed to align the reflected shock and the Mach stem as computational boundaries to implement the shock-fitting procedure. A special floating fitting scheme is developed in conjunction with the method of characteristics to fit the slip surface. The reflected shock, the Mach stem, and the slip surface are all treated as sharp discontinuities, thus resulting in a more accurate description of the inviscid flow field. The resulting numerical solutions are compared with available experimental data and existing first-order, shock-capturing numerical solutions.

  18. Spatial-temporal total variation regularization (STTVR) for 4D-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Haibo; Maier, Andreas; Fahrig, Rebecca; Hornegger, Joachim

    2012-03-01

    Four-dimensional computed tomography (4D-CT) is very important for treatment planning in the thorax or abdomen area, e.g., for guiding radiation therapy planning. The respiratory motion makes the reconstruction problem ill-posed. Recently, compressed sensing theory was introduced. It uses sparsity as a prior to solve the problem and improves image quality considerably. However, the images at each phase are reconstructed individually. The correlations between neighboring phases are not considered in the reconstruction process. In this paper, we propose the spatial-temporal total variation regularization (STTVR) method, which employs sparsity not only in the spatial domain but also in the temporal domain. The algorithm is validated with the XCAT thorax phantom. The Euclidean norm of the difference between the reconstructed image and the ground truth is calculated for evaluation. The results indicate that our method improves the reconstruction quality by more than 50% compared to standard ART.
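
    The prior itself is easy to state: an anisotropic spatial-temporal TV norm sums absolute finite differences along the three spatial axes and along the phase (time) axis. The sketch below computes such a norm for a 4D array; the weighting mu between the temporal and spatial terms is an assumed knob, not necessarily the paper's exact formulation.

```python
import numpy as np

def sttv(x, mu=1.0):
    """Anisotropic spatial-temporal total variation of a 4D array
    x[t, z, y, x]: spatial TV within each phase plus mu times the TV
    across neighboring phases (a sketch of an STTVR-style prior)."""
    spatial = sum(np.abs(np.diff(x, axis=a)).sum() for a in (1, 2, 3))
    temporal = np.abs(np.diff(x, axis=0)).sum()
    return spatial + mu * temporal
```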

  19. Self-equilibrium and stability of regular truncated tetrahedral tensegrity structures

    NASA Astrophysics Data System (ADS)

    Zhang, J. Y.; Ohsaki, M.

    2012-10-01

    This paper presents analytical conditions of self-equilibrium and super-stability for the regular truncated tetrahedral tensegrity structures, nodes of which have a one-to-one correspondence to the tetrahedral group. These conditions are presented in terms of force densities, by investigating the block-diagonalized force density matrix. The block-diagonalized force density matrix, with independent sub-matrices lying on its leading diagonal, is derived by making use of the tetrahedral symmetry via group representation theory. The condition for self-equilibrium is found by enforcing the force density matrix to have the necessary number of nullities, which is four for three-dimensional structures. The condition for super-stability is further presented by guaranteeing positive semi-definiteness of the force density matrix.
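
    The force density matrix itself is simple to assemble and test numerically. The sketch below builds it from member connectivity and force densities and checks the two conditions stated above (nullity at least four in 3D, positive semi-definiteness); the assembly convention is the standard one and not specific to this paper.

```python
import numpy as np

def force_density_matrix(n_nodes, members, q):
    """Assemble the force density matrix E of a pin-jointed network:
    E[i, i] accumulates the force densities of members at node i,
    E[i, j] = -q_k if member k connects nodes i and j."""
    E = np.zeros((n_nodes, n_nodes))
    for (i, j), qk in zip(members, q):
        E[i, i] += qk
        E[j, j] += qk
        E[i, j] -= qk
        E[j, i] -= qk
    return E

def check_conditions(E, d=3, tol=1e-9):
    """Self-equilibrium in d dimensions requires nullity >= d + 1 (four in
    3D); super-stability additionally requires E positive semi-definite."""
    w = np.linalg.eigvalsh(E)
    nullity = int(np.sum(np.abs(w) < tol))
    return nullity >= d + 1, w.min() > -tol
```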

  20. Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators.

    PubMed

    Wolfrum, Matthias; Omel'chenko, Oleh E; Sieber, Jan

    2015-05-01

    We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop, connecting the phase lag to the order parameter, we can observe chimera states also for systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium.

  1. Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators

    SciTech Connect

    Wolfrum, Matthias; Omel'chenko, Oleh E.; Sieber, Jan

    2015-05-15

    We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop, connecting the phase lag to the order parameter, we can observe chimera states also for systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium.

  2. Regularized iterative weighted filtered backprojection for helical cone-beam CT

    SciTech Connect

    Sunnegaardh, Johan; Danielsson, Per-Erik

    2008-09-15

    Contemporary reconstruction methods employed for clinical helical cone-beam computed tomography (CT) are analytical (noniterative) but mathematically nonexact, i.e., the reconstructed image contains so-called cone-beam artifacts, especially for higher cone angles. Besides cone artifacts, these methods also suffer from windmill artifacts: alternating dark and bright regions creating spiral-like patterns occurring in the vicinity of high z-direction derivatives. In this article, the authors examine the possibility to suppress cone and windmill artifacts by means of iterative application of nonexact three-dimensional filtered backprojection, where the analytical part of the reconstruction brings about accelerated convergence. Specifically, they base their investigations on the weighted filtered backprojection method [Stierstorfer et al., Phys. Med. Biol. 49, 2209-2218 (2004)]. Enhancement of high frequencies and amplification of noise is a common but unwanted side effect in many acceleration attempts. They have employed linear regularization to avoid these effects and to improve the convergence properties of the iterative scheme. Artifacts and noise, as well as spatial resolution in terms of modulation transfer functions and slice sensitivity profiles have been measured. The results show that for cone angles up to ±2.78 deg., cone artifacts are suppressed and windmill artifacts are alleviated within three iterations. Furthermore, regularization parameters controlling spatial resolution can be tuned so that image quality in terms of spatial resolution and noise is preserved. Simulations with higher number of iterations and long objects (exceeding the measured region) verify that the size of the reconstructible region is not reduced, and that the regularization greatly improves the convergence properties of the iterative scheme. Taking these results into account, and the possibilities to extend the proposed method with more accurate modeling of the acquisition
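
    Schematically, such a scheme is an analytic-inverse-accelerated iteration with a linear damping term. The sketch below uses stand-in matrices for the projector, the nonexact weighted-FBP inverse, and the regularizer; it shows only the structure of a regularized residual-correction update, not the authors' implementation.

```python
import numpy as np

def iterative_regularized_fbp(R, B, L, p, beta=0.05, n_iter=3):
    """Residual-correction iteration with a nonexact analytic inverse B
    (standing in for weighted FBP) and a linear regularizer L:
        f_{k+1} = f_k + B (p - R f_k) - beta * L f_k
    R is the forward projector, p the measured projections; all three
    operators are generic matrices in this sketch."""
    f = B @ p                      # initial nonexact reconstruction
    for _ in range(n_iter):
        f = f + B @ (p - R @ f) - beta * (L @ f)
    return f
```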

  3. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization.

    PubMed

    Canales-Rodríguez, Erick J; Daducci, Alessandro; Sotiropoulos, Stamatios N; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data.
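
    As a reference point, here is the baseline Richardson-Lucy iteration for a nonnegative linear model, the starting point that RUMBA-SD adapts to Rician and noncentral-Chi likelihoods; the matrix H and the initialization are illustrative.

```python
import numpy as np

def richardson_lucy(y, H, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy update for y ~ H x with nonnegative data:
        x <- x * H^T ( y / (H x) ) / (H^T 1)
    This is the Poisson/Gaussian-era baseline; RUMBA-SD replaces the
    underlying likelihood with Rician/noncentral-Chi models."""
    x = np.full(H.shape[1], y.mean() + eps)
    Ht1 = H.T @ np.ones_like(y)
    for _ in range(n_iter):
        ratio = y / (H @ x + eps)
        x *= (H.T @ ratio) / (Ht1 + eps)
    return x
```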

  4. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024

  6. Regularization design in penalized maximum-likelihood image reconstruction for lesion detection in 3D PET

    NASA Astrophysics Data System (ADS)

    Yang, Li; Zhou, Jian; Ferrero, Andrea; Badawi, Ramsey D.; Qi, Jinyi

    2014-01-01

    Detecting cancerous lesions is a major clinical application in emission tomography. In previous work, we have studied penalized maximum-likelihood (PML) image reconstruction for the detection task and proposed a method to design a shift-invariant quadratic penalty function to maximize detectability of a lesion at a known location in a two-dimensional image. Here we extend the regularization design to maximize detectability of lesions at unknown locations in fully 3D PET. We used a multiview channelized Hotelling observer (mvCHO) to assess the lesion detectability in 3D images to mimic the condition where a human observer examines three orthogonal views of a 3D image for lesion detection. We derived simplified theoretical expressions that allow fast prediction of the detectability of a 3D lesion. The theoretical results were used to design the regularization in PML reconstruction to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the optimized penalty with the conventional penalty for detecting lesions of various sizes. Only true coincidence events were simulated. Lesion detectability was also assessed by two human observers, whose performances agree well with that of the mvCHO. Both the numerical observer and human observer results showed a statistically significant improvement in lesion detection by using the proposed penalty function compared to using the conventional penalty function.

  7. Molecular cancer classification using a meta-sample-based regularized robust coding method

    PubMed Central

    2014-01-01

    Motivation: Previous studies have demonstrated that machine learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. Results: In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as the sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Conclusions: Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods. PMID:25473795
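
    A minimal sketch of the meta-sample coding step: encode the test sample over meta-samples with an l1-penalized fit (plain ISTA here) and classify by the smallest per-class residual. The plain l2 residual stands in for the robust weighted fidelity of RRC, so this is the MSRC-style skeleton rather than MRRCC itself.

```python
import numpy as np

def ista(D, s, lam=0.1, n_iter=200):
    """Solve min_a ||s - D a||^2 + lam * ||a||_1 by ISTA (sketch)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz scale of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a + (D.T @ (s - D @ a)) / L    # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)
    return a

def classify(s, metasamples, labels, lam=0.1):
    """Assign the class whose meta-samples give the smallest coding
    residual; metasamples has one meta-sample per column, labels gives
    the class of each column."""
    a = ista(metasamples, s, lam)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(s - metasamples[:, mask] @ a[mask])
    return min(residuals, key=residuals.get)
```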

  8. Hadamard regularization of the third post-Newtonian gravitational wave generation of two point masses

    SciTech Connect

    Blanchet, Luc; Iyer, Bala R.

    2005-01-15

    Continuing previous work on the 3PN-accurate gravitational-wave generation from point-particle binaries, we obtain the binary's 3PN mass-type quadrupole and dipole moments for general (not necessarily circular) orbits in harmonic coordinates. The final expressions are given in terms of their core parts, resulting from the application of the pure-Hadamard-Schwartz self-field regularization scheme, and augmented by an ambiguous part. In the case of the 3PN quadrupole we find three ambiguity parameters, ξ, κ, and ζ, but only one for the 3PN dipole, in the form of the particular combination ξ + κ. Requiring that the dipole moment agree with the center-of-mass position deduced from the 3PN equations of motion in harmonic coordinates yields the relation ξ + κ = -9871/9240. Our results will form the basis of the complete calculation of the 3PN radiation field of compact binaries by means of dimensional regularization.

  9. Critical behavior of the XY-rotor model on regular and small-world networks.

    PubMed

    De Nigris, Sarah; Leoncini, Xavier

    2013-07-01

    We study the XY rotors model on small networks whose number of links scales with the system size as N_links ~ N^γ, where 1 ≤ γ ≤ 2. We first focus on regular one-dimensional rings in the microcanonical ensemble. For γ < 1.5 the model behaves like a short-range one and no phase transition occurs. For γ > 1.5, the system equilibrium properties are found to be identical to the mean field, which displays a second-order phase transition at a critical energy density ε = E/N, ε_c = 0.75. Moreover, for γ_c ≈ 1.5 we find that a nontrivial state emerges, characterized by an infinite susceptibility. We then consider small-world networks, using the Watts-Strogatz mechanism on the regular networks parametrized by γ. We first analyze the topology and find that the small-world regime appears for rewiring probabilities which scale as p_SW ∝ 1/N^γ. Then, considering the XY-rotors model on these networks, we find that a second-order phase transition occurs at a critical energy ε_c which depends logarithmically on the topological parameters p and γ. We also define a critical probability p_MF, corresponding to the probability beyond which the mean field is quantitatively recovered, and we analyze its dependence on γ.

  10. Minimum divergence viscous flow simulation through finite difference and regularization techniques

    NASA Astrophysics Data System (ADS)

    Victor, Rodolfo A.; Mirabolghasemi, Maryam; Bryant, Steven L.; Prodanović, Maša

    2016-09-01

    We develop a new algorithm to simulate single- and two-phase viscous flow through a three-dimensional Cartesian representation of the porous space, such as those available through X-ray microtomography. We use the finite difference method to discretize the governing equations and also propose a new method to enforce the incompressible flow constraint under zero Neumann boundary conditions for the velocity components. Finite difference formulation leads to fast parallel implementation through linear solvers for sparse matrices, allowing relatively fast simulations, while regularization techniques used on solving inverse problems lead to the desired incompressible fluid flow. Tests performed using benchmark samples show good agreement with experimental/theoretical values. Additional tests are run on Bentheimer and Buff Berea sandstone samples with available laboratory measurements. We compare the results from our new method, based on finite differences, with an open source finite volume implementation as well as experimental results, specifically to evaluate the benefits and drawbacks of each method. Finally, we calculate relative permeability by using this modified finite difference technique together with a level set based algorithm for multi-phase fluid distribution in the pore space. To our knowledge this is the first time regularization techniques are used in combination with finite difference fluid flow simulations.

  11. Rigidity percolation by next-nearest-neighbor bonds on generic and regular isostatic lattices.

    PubMed

    Zhang, Leyou; Rocklin, D Zeb; Chen, Bryan Gin-ge; Mao, Xiaoming

    2015-03-01

    We study rigidity percolation transitions in two-dimensional central-force isostatic lattices, including the square and the kagome lattices, as next-nearest-neighbor bonds ("braces") are randomly added to the system. In particular, we focus on the differences between regular lattices, which are perfectly periodic, and generic lattices with the same topology of bonds but whose sites are at random positions in space. We find that the regular square and kagome lattices exhibit a rigidity percolation transition when the number of braces is ∼ L ln L, where L is the linear size of the lattice. This transition exhibits features of both first-order and second-order transitions: The whole lattice becomes rigid at the transition, and a diverging length scale also exists. In contrast, we find that the rigidity percolation transition in generic lattices occurs when the number of braces is very close to the number obtained from Maxwell's law for floppy modes, which is ∼ L. The transition in generic lattices is a very sharp first-order-like transition, at which the addition of one brace connects all small rigid regions in the bulk of the lattice, leaving only floppy modes on the edge. We characterize these transitions using numerical simulations and develop analytic theories capturing each transition. Our results relate to other interesting problems, including jamming and bootstrap percolation. PMID:25871071

  12. A Unified Approach for Solving Nonlinear Regular Perturbation Problems

    ERIC Educational Resources Information Center

    Khuri, S. A.

    2008-01-01

    This article describes a simple alternative unified method of solving nonlinear regular perturbation problems. The procedure is based upon the manipulation of Taylor's approximation for the expansion of the nonlinear term in the perturbed equation. An essential feature of this technique is the relative simplicity used and the associated unified…

  13. Surface-based prostate registration with biomechanical regularization

    NASA Astrophysics Data System (ADS)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MRUS registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.

  14. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work Application of Principles in Relation to Other Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... employees. In determining what activities constitute hours of work under the Act, there is generally...

  15. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work Application of Principles in Relation to Other Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... employees. In determining what activities constitute hours of work under the Act, there is generally...

  16. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  17. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  18. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  19. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work Application of Principles in Relation to Other Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... employees. In determining what activities constitute hours of work under the Act, there is generally...

  20. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  1. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work Application of Principles in Relation to Other Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... employees. In determining what activities constitute hours of work under the Act, there is generally...

  2. 5 CFR 551.511 - Hourly regular rate of pay.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Hourly regular rate of pay. 551.511 Section 551.511 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Overtime Pay Provisions Overtime Pay Computations §...

  3. Regularized Partial and/or Constrained Redundancy Analysis

    ERIC Educational Resources Information Center

    Takane, Yoshio; Jung, Sunho

    2008-01-01

    Methods of incorporating a ridge type of regularization into partial redundancy analysis (PRA), constrained redundancy analysis (CRA), and partial and constrained redundancy analysis (PCRA) were discussed. The usefulness of ridge estimation in reducing mean square error (MSE) has been recognized in multiple regression analysis for some time,…

  4. Rhythm's Gonna Get You: Regular Meter Facilitates Semantic Sentence Processing

    ERIC Educational Resources Information Center

    Rothermich, Kathrin; Schmidt-Kassow, Maren; Kotz, Sonja A.

    2012-01-01

    Rhythm is a phenomenon that fundamentally affects the perception of events unfolding in time. In language, we define "rhythm" as the temporal structure that underlies the perception and production of utterances, whereas "meter" is defined as the regular occurrence of beats (i.e. stressed syllables). In stress-timed languages such as German, this…

  5. New Technologies in Portugal: Regular Middle and High School

    ERIC Educational Resources Information Center

    Florentino, Teresa; Sanchez, Lucas; Joyanes, Luis

    2010-01-01

    Purpose: The purpose of this paper is to elaborate upon the relation between information and communication technologies (ICT), particularly web-based resources, and their use, programs and learning in Portuguese middle and high regular public schools. Design/methodology/approach: Adding collected documentation on curriculum, laws and other related…

  6. Preverbal Infants Infer Intentional Agents from the Perception of Regularity

    ERIC Educational Resources Information Center

    Ma, Lili; Xu, Fei

    2013-01-01

    Human adults have a strong bias to invoke intentional agents in their intuitive explanations of ordered wholes or regular compositions in the world. Less is known about the ontogenetic origin of this bias. In 4 experiments, we found that 9-to 10-month-old infants expected a human hand, but not a mechanical tool with similar affordances, to be the…

  7. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... status when appointed as cadets. (b) Regular category applicants must arrange to have their high school... SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14.... Applicants not selected are reassigned on Academy notification to the CBPO. Applicants to technical...

  8. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... status when appointed as cadets. (b) Regular category applicants must arrange to have their high school... SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14.... Applicants not selected are reassigned on Academy notification to the CBPO. Applicants to technical...

  9. Distances and isomorphisms in 4-regular circulant graphs

    NASA Astrophysics Data System (ADS)

    Donno, Alfredo; Iacono, Donatella

    2016-06-01

    We compute the Wiener index and the Hosoya polynomial of the Cayley graph of some cyclic groups, with all possible generating sets containing four elements, up to isomorphism. We find that the order 17 is the smallest case providing two non-isomorphic 4-regular circulant graphs with the same Wiener index. Some open problems and questions are listed.
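
    Because circulant graphs are vertex-transitive, the Wiener index follows from a single BFS. In the sketch below, the generating sets for order 17 are illustrative placeholders, not necessarily the non-isomorphic pair with equal Wiener index identified in the paper.

```python
from collections import deque

def wiener_index(n, gens):
    """Wiener index of the circulant graph Cay(Z_n, S) with S = gens and
    their negatives, via BFS from vertex 0; vertex-transitivity makes every
    row of the distance matrix a relabeling of this one."""
    steps = {g % n for g in gens} | {(-g) % n for g in gens}
    dist = [None] * n
    dist[0] = 0
    dq = deque([0])
    while dq:
        u = dq.popleft()
        for s in steps:
            v = (u + s) % n
            if dist[v] is None:
                dist[v] = dist[u] + 1
                dq.append(v)
    return n * sum(dist) // 2   # ordered-pair total, halved

# Two 4-regular circulants on 17 vertices (generating sets illustrative):
print(wiener_index(17, {1, 2}), wiener_index(17, {1, 4}))
```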

  10. Exploring How Special and Regular Education Teachers Work Together Collaboratively

    ERIC Educational Resources Information Center

    Broyard-Baptiste, Erin

    2012-01-01

    This study was based on the need for additional research to explore the nature of collaborative teaching experiences in the K-12 education setting. For that reason, this study was designed to examine the experiences and perceptions of special education and regular education teachers with respect to inclusion and the perceptions of these teachers…

  11. The Hearing Impaired Student in the Regular Classroom.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    The guide provides strategies for teachers to use with deaf and hearing impaired (HI) students in regular classrooms in the province of Alberta, Canada. An introductory section includes symptoms of a suspected hearing loss and a sample audiogram to aid teachers in recognizing the problem. Ways to meet special needs at different age levels are…

  12. Acquisition of Formulaic Sequences in Intensive and Regular EFL Programmes

    ERIC Educational Resources Information Center

    Serrano, Raquel; Stengers, Helene; Housen, Alex

    2015-01-01

    This paper aims to analyse the role of time concentration of instructional hours on the acquisition of formulaic sequences in English as a foreign language (EFL). Two programme types that offer the same amount of hours of instruction are considered: intensive (110 hours/1 month) and regular (110 hours/7 months). The EFL learners under study are…

  13. Adult Regularization of Inconsistent Input Depends on Pragmatic Factors

    ERIC Educational Resources Information Center

    Perfors, Amy

    2016-01-01

    In a variety of domains, adults who are given input that is only partially consistent do not discard the inconsistent portion (regularize) but rather maintain the probability of consistent and inconsistent portions in their behavior (probability match). This research investigates the possibility that adults probability match, at least in part,…

  14. From Numbers to Letters: Feedback Regularization in Visual Word Recognition

    ERIC Educational Resources Information Center

    Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel

    2010-01-01

    Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…

  15. Multiple Learning Strategies Project. Medical Assistant. [Regular Vocational. Vol. 1.

    ERIC Educational Resources Information Center

    Varney, Beverly; And Others

    This instructional package, one of four designed for regular vocational students, focuses on the vocational area of medical assistant. Contained in this document are twenty-six learning modules organized into three units: language; receptioning; and asepsis. Each module includes these elements: a performance objective page telling what will be…

  16. Psychological Benefits of Regular Physical Activity: Evidence from Emerging Adults

    ERIC Educational Resources Information Center

    Cekin, Resul

    2015-01-01

    Emerging adulthood is a transitional stage between late adolescence and young adulthood in life-span development that requires significant changes in people's lives. Therefore, identifying protective factors for this population is crucial. This study investigated the effects of regular physical activity on self-esteem, optimism, and happiness in…

  17. Integrating Handicapped Children Into Regular Classrooms. (With Abstract Bibliography).

    ERIC Educational Resources Information Center

    Glockner, Mary

    This document is based on an interview with Dr. Jenny Klein, Director of Educational Services, Office of Child Development, who stresses the desirability of integrating handicapped children into regular classrooms. She urges the teacher to view the handicapped child as a normal child with some special needs. Specific suggestions for the teacher…

  18. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, G.C.

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
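
    In its simplest linear form, the regularized inversion step described above solves a penalized least-squares system in which the prior model carries the geophysical information; the sketch below shows that generic step, with the radar-derived prior left as an assumed input.

```python
import numpy as np

def regularized_inversion(J, d, m_prior, alpha):
    """One step of a regularized linear(ized) inversion,
        min_m ||J m - d||^2 + alpha * ||m - m_prior||^2,
    where m_prior could encode geophysical information (e.g. a K field
    predicted from radar velocity/attenuation via a spline model).
    Sweeping alpha traces the tradeoff between data fit and model norm."""
    n = J.shape[1]
    A = J.T @ J + alpha * np.eye(n)
    return np.linalg.solve(A, J.T @ d + alpha * m_prior)
```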

  19. Interrupting Sitting Time with Regular Walks Attenuates Postprandial Triglycerides.

    PubMed

    Miyashita, M; Edamoto, K; Kidokoro, T; Yanaoka, T; Kashiwabara, K; Takahashi, M; Burns, S

    2016-02-01

    We compared the effects of prolonged sitting with the effects of sitting interrupted by regular walking and the effects of prolonged sitting after continuous walking on postprandial triglyceride in postmenopausal women. 15 participants completed 3 trials in random order: 1) prolonged sitting, 2) regular walking, and 3) prolonged sitting preceded by continuous walking. During the sitting trial, participants rested for 8 h. For the walking trials, participants walked briskly in either twenty 90-sec bouts over 8 h or one 30-min bout in the morning (09:00-09:30). Except for walking, both exercise trials mimicked the sitting trial. In each trial, participants consumed a breakfast (08:00) and lunch (11:00). Blood samples were collected in the fasted state and at 2, 4, 6 and 8 h after breakfast. The serum triglyceride incremental area under the curve was 15 and 14% lower after regular walking compared with prolonged sitting and prolonged sitting after continuous walking (4.73±2.50 vs. 5.52±2.95 vs. 5.50±2.59 mmol/L∙8 h respectively, main effect of trial: P=0.023). Regularly interrupting sitting time with brief bouts of physical activity can reduce postprandial triglyceride in postmenopausal women. PMID:26509374

  20. Effect of regular and decaffeinated coffee on serum gastrin levels.

    PubMed

    Acquaviva, F; DeFrancesco, A; Andriulli, A; Piantino, P; Arrigoni, A; Massarenti, P; Balzola, F

    1986-04-01

    We evaluated the hypothesis that the noncaffeine gastric acid stimulant effect of coffee might be by way of serum gastrin release. After 10 healthy volunteers drank 50 ml of coffee solution corresponding to one cup of home-made regular coffee containing 10 g of sugar and 240 mg/100 ml of caffeine, serum total gastrin levels peaked at 10 min and returned to basal values within 30 min; the response was of little significance (1.24 times the median basal value). Drinking 100 ml of sugared water (as control) resulted in occasional random elevations of serum gastrin which were not statistically significant. Drinking 100 ml of regular or decaffeinated coffee resulted in a prompt and lasting elevation of total gastrin; mean integrated outputs after regular or decaffeinated coffee were, respectively, 2.3 and 1.7 times the values in the control test. Regular and decaffeinated coffees share a strong gastrin-releasing property. Neither distension, osmolarity, calcium, nor amino acid content of the coffee solution can account for this property, which should be ascribed to some other unidentified ingredient. This property is at least partially lost during the process of caffeine removal. PMID:3745848

  1. Implicit Learning of L2 Word Stress Regularities

    ERIC Educational Resources Information Center

    Chan, Ricky K. W.; Leung, Janny H. C.

    2014-01-01

    This article reports an experiment on the implicit learning of second language stress regularities, and presents a methodological innovation on awareness measurement. After practising two-syllable Spanish words, native Cantonese speakers with English as a second language (L2) completed a judgement task. Critical items differed only in placement of…

  2. Integration of Dependent Handicapped Classes into the Regular School.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Guidelines are provided for integrating the dependent handicapped student (DHS) into the regular school in Alberta, Canada. A short overview comprises the introduction. Identified are two types of integration: (1) incidental contact and (2) planned contact for social, recreational, and educational activities with other students. Noted are types of…

  3. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
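
    The ℓp/Schatten proximal-map link can be sketched for p = 1 (the nuclear norm): the matrix prox reduces to applying the vector soft-thresholding prox to the singular values. A generic illustration, not the authors' ADMM code:

```python
import numpy as np

def prox_l1(v, tau):
    # Soft-thresholding: proximal map of tau * ||.||_1 on vectors.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_schatten1(X, tau):
    # Proximal map of tau * (Schatten 1-norm): apply the vector l1 prox
    # to the singular values and rebuild the matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * prox_l1(s, tau)) @ Vt
```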

  4. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Cable television system regular monitoring. 76.614 Section 76.614 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable...

  5. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... utilized by a cable operator shall be adequate to detect a leakage source which produces a field strength... leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in...

  6. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... utilized by a cable operator shall be adequate to detect a leakage source which produces a field strength... leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in...

  7. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... utilized by a cable operator shall be adequate to detect a leakage source which produces a field strength... leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in...

  8. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... utilized by a cable operator shall be adequate to detect a leakage source which produces a field strength... leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in...

  9. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...

  10. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...

  11. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...

  12. Nonnative Processing of Verbal Morphology: In Search of Regularity

    ERIC Educational Resources Information Center

    Gor, Kira; Cook, Svetlana

    2010-01-01

    There is little agreement on the mechanisms involved in second language (L2) processing of regular and irregular inflectional morphology and on the exact role of age, amount, and type of exposure to L2 resulting in differences in L2 input and use. The article contributes to the ongoing debates by reporting the results of two experiments on Russian…

  13. 75 FR 23218 - Information Collection; Direct Loan Servicing-Regular

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-03

    ... loan agreements, assist the borrower in achieving business goals, and regular servicing of the loan... methods: Mail: J. Lee Nault, Loan Specialist, USDA/FSA/FLP, STOP 0523, 1400 Independence Avenue, SW., Washington, DC 20250-0503. E-mail: lee.nault@wdc.usda.gov. Fax: 202-690-0949. You may also send comments...

  14. Cost Effectiveness of Premium Versus Regular Gasoline in MCPS Buses.

    ERIC Educational Resources Information Center

    Baacke, Clifford M.; Frankel, Steven M.

    The primary question posed in this study is whether premium or regular gasoline is more cost effective for the Montgomery County Public School (MCPS) bus fleet, as a whole, when miles-per-gallon, cost-per-gallon, and repair costs associated with mileage are considered. On average, both miles-per-gallon, and repair costs-per-mile favor premium…

  15. The Student with Albinism in the Regular Classroom.

    ERIC Educational Resources Information Center

    Ashley, Julia Robertson

    This booklet, intended for regular education teachers who have children with albinism in their classes, begins with an explanation of albinism, then discusses the special needs of the student with albinism in the classroom, and presents information about adaptations and other methods for responding to these needs. Special social and emotional…

  16. Identifying and Exploiting Spatial Regularity in Data Memory References

    SciTech Connect

    Mohan, T; de Supinski, B R; McKee, S A; Mueller, F; Yoo, A; Schulz, M

    2003-07-24

    The growing processor/memory performance gap causes the performance of many codes to be limited by memory accesses. If known to exist in an application, strided memory accesses forming streams can be targeted by optimizations such as prefetching, relocation, remapping, and vector loads. Undetected, they can be a significant source of memory stalls in loops. Existing stream-detection mechanisms either require special hardware, which may not gather statistics for subsequent analysis, or are limited to compile-time detection of array accesses in loops. Formally, little treatment has been accorded to the subject; the concept of locality fails to capture the existence of streams in a program's memory accesses. The contributions of this paper are as follows. First, we define spatial regularity as a means to discuss the presence and effects of streams. Second, we develop measures to quantify spatial regularity, and we design and implement an on-line, parallel algorithm to detect streams - and hence regularity - in running applications. Third, we use examples from real codes and common benchmarks to illustrate how derived stream statistics can be used to guide the application of profile-driven optimizations. Overall, we demonstrate the benefits of our novel regularity metric as a low-cost instrument to detect potential for code optimizations affecting memory performance.
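
    A toy, offline version of stride-run ("stream") detection over an address trace, assuming integer byte addresses; the paper's on-line, parallel algorithm is considerably more involved:

```python
def detect_streams(addresses, min_run=4):
    # Greedily group consecutive accesses into fixed-stride runs ("streams").
    streams = []
    run_start, stride, run_len = 0, None, 1
    for i in range(1, len(addresses)):
        d = addresses[i] - addresses[i - 1]
        if d == stride:
            run_len += 1
            continue
        if stride is not None and run_len >= min_run:
            streams.append((addresses[run_start], stride, run_len))
        run_start, stride, run_len = i - 1, d, 2
    if stride is not None and run_len >= min_run:
        streams.append((addresses[run_start], stride, run_len))
    return streams

# A strided walk (stride 8) interrupted by irregular accesses.
print(detect_streams([0, 8, 16, 24, 32, 1000, 1004]))  # [(0, 8, 5)]
```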

  17. New vision based navigation clue for a regular colonoscope's tip

    NASA Astrophysics Data System (ADS)

    Mekaouar, Anouar; Ben Amar, Chokri; Redarce, Tanneguy

    2009-02-01

    Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to perform safely. Indeed, the practitioner must contend with both the tortuousness of the colon and the handling of the colonoscope: the visual data acquired by the scope's tip must be taken into account, and the practitioner relies mostly on experience and skill to steer the tip in a fashion that promotes safe insertion of the device's shaft. In this context, we propose a new navigation clue for the tip of a regular colonoscope to assist surgeons during a colonoscopic examination. First, we consider a patch of the inner colon depicted in a regular colonoscopy frame. We then perform a rough 3D reconstruction of the corresponding 2D data, and a navigation trajectory is suggested on the basis of the obtained relief. Both the visible and invisible lumen cases are considered. Owing to its low computational cost, this strategy allows for intraoperative configuration changes and thus mitigates the effect of the colon's non-rigidity. Moreover, it tends to provide a safe navigation trajectory through the whole colon, since the approach aims to keep the extremity of the instrument as far as possible from the colon wall during navigation. To make this process effective, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.

  18. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
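
    The core Soft-Impute iteration can be sketched as follows (a dense NumPy SVD for clarity; the paper's scalable variant exploits low-rank structure in the SVD step):

```python
import numpy as np

def soft_impute(X_obs, mask, lam, n_iters=100):
    # X_obs: matrix with observed entries (anything elsewhere);
    # mask: boolean array of observed positions; lam: nuclear-norm weight.
    Z = np.zeros_like(X_obs, dtype=float)
    for _ in range(n_iters):
        filled = np.where(mask, X_obs, Z)        # impute missing entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt  # soft-threshold the SVD
    return Z
```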

  19. Global Regularity for Several Incompressible Fluid Models with Partial Dissipation

    NASA Astrophysics Data System (ADS)

    Wu, Jiahong; Xu, Xiaojing; Ye, Zhuan

    2016-09-01

    This paper examines the global regularity problem on several 2D incompressible fluid models with partial dissipation. They are the surface quasi-geostrophic (SQG) equation, the 2D Euler equation and the 2D Boussinesq equations. These are well-known models in fluid mechanics and geophysics. The fundamental issue of whether or not they are globally well-posed has attracted enormous attention. The corresponding models with partial dissipation may arise in physical circumstances when the dissipation varies in different directions. We show that the SQG equation with either horizontal or vertical dissipation always has global solutions. This is in sharp contrast with the inviscid SQG equation, for which the global regularity problem remains a major open problem. Although the 2D Euler is globally well-posed for sufficiently smooth data, the associated equations with partial dissipation no longer conserve the vorticity and the global regularity is not trivial. We are able to prove the global regularity for two partially dissipated Euler equations. Several global bounds are also obtained for a partially dissipated Boussinesq system.

  20. Elementary Teachers' Perspectives of Inclusion in the Regular Education Classroom

    ERIC Educational Resources Information Center

    Olinger, Becky Lorraine

    2013-01-01

    The purpose of this qualitative study was to examine regular education and special education teacher perceptions of inclusion services in an elementary school setting. In this phenomenological study, purposeful sampling techniques and data were used to conduct a study of inclusion in the elementary schools. In-depth one-to-one interviews with 8…

  1. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework. PMID:23846472

  2. Factors Contributing to Regular Smoking in Adolescents in Turkey

    ERIC Educational Resources Information Center

    Can, Gamze; Topbas, Murat; Oztuna, Funda; Ozgun, Sukru; Can, Emine; Yavuzyilmaz, Asuman

    2009-01-01

    Purpose: The objectives of this study were to determine the levels of lifetime cigarette use, daily use, and current use among young people (aged 15-19 years) and to examine the risk factors contributing to regular smoking. Methods: The number of students was determined proportionately to the numbers of students in all the high schools in the…

  3. Factors Relating to Regular Education Teacher Burnout in Inclusive Education

    ERIC Educational Resources Information Center

    Talmor, Rachel; Reiter, Shunit; Feigin, Neomi

    2005-01-01

    The aims of the research were to identify the environmental factors that relate to the work of regular school teachers who have students with special needs in their classroom, and to find out the correlation between these factors and teacher burnout. A total 330 primary school teachers filled in a questionnaire that had three parts: (1) personal…

  4. 75 FR 13598 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 10 a.m., Monday, March 22, 2010. PLACE: 1325 G Street, NW., Suite 800 Boardroom, Washington, DC 20005. STATUS: Open. CONTACT PERSON...

  5. 76 FR 14699 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-17

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 11 a.m., Tuesday, March 22, 2011. PLACE: 1325 G Street, NW., Suite 800, Boardroom, Washington, DC 20005. STATUS: Open. CONTACT PERSON...

  6. 75 FR 59747 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-28

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act Time and Date: 2 p.m., Wednesday, September 22, 2010. Place: 1325 G Street NW., Suite 800, Boardroom, Washington, DC 20005. Status: Open....

  7. 75 FR 77010 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-10

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act Time and Date: 2:30 p.m., Wednesday, December 15, 2010. Place: 1325 G Street, NW., Suite 800, Boardroom, Washington, DC 20005. Status: Open....

  8. 78 FR 36794 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-19

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 9:30 a.m., Tuesday, June 25, 2013. PLACE: 999 North Capitol St NE., Suite 900, Gramlich Boardroom, Washington, DC 20002. STATUS:...

  9. 77 FR 58416 - Regular Board of Directors Meeting; Sunshine Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-20

    ... From the Federal Register Online via the Government Publishing Office NEIGHBORHOOD REINVESTMENT CORPORATION Regular Board of Directors Meeting; Sunshine Act TIME AND DATE: 2:00 p.m., Monday, October 1, 2012.... Call to Order II. Executive Session III. Approval of the Annual Board of Directors Meeting Minutes...

  10. Sparse regularization techniques provide novel insights into outcome integration processes.

    PubMed

    Mohr, Holger; Wolfensteller, Uta; Frimmel, Steffi; Ruge, Hannes

    2015-01-01

    By exploiting information that is contained in the spatial arrangement of neural activations, multivariate pattern analysis (MVPA) can detect distributed brain activations which are not accessible by standard univariate analysis. Recent methodological advances in MVPA regularization techniques have made it feasible to produce sparse discriminative whole-brain maps with highly specific patterns. Furthermore, the most recent refinement, the Graph Net, explicitly takes the 3D-structure of fMRI data into account. Here, these advanced classification methods were applied to a large fMRI sample (N=70) in order to gain novel insights into the functional localization of outcome integration processes. While the beneficial effect of differential outcomes is well-studied in trial-and-error learning, outcome integration in the context of instruction-based learning has remained largely unexplored. In order to examine neural processes associated with outcome integration in the context of instruction-based learning, two groups of subjects underwent functional imaging while being presented with either differential or ambiguous outcomes following the execution of varying stimulus-response instructions. While no significant univariate group differences were found in the resulting fMRI dataset, L1-regularized (sparse) classifiers performed significantly above chance and also clearly outperformed the standard L2-regularized (dense) Support Vector Machine on this whole-brain between-subject classification task. Moreover, additional L2-regularization via the Elastic Net and spatial regularization by the Graph Net improved interpretability of discriminative weight maps but were accompanied by reduced classification accuracies. Most importantly, classification based on sparse regularization facilitated the identification of highly specific regions differentially engaged under ambiguous and differential outcome conditions, comprising several prefrontal regions previously associated with
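
    A sketch of the sparse-versus-dense contrast with off-the-shelf tools (synthetic stand-in data, not the study's fMRI pipeline; the Graph Net itself is not part of scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: rows = subjects, columns = voxel features.
rng = np.random.default_rng(0)
X = rng.standard_normal((70, 5000))
y = rng.integers(0, 2, 70)

# Sparse (L1) vs. dense (L2) regularization for between-subject decoding.
sparse_clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
dense_clf = LogisticRegression(penalty="l2", solver="liblinear", C=1.0)

print("L1 accuracy:", cross_val_score(sparse_clf, X, y, cv=5).mean())
print("L2 accuracy:", cross_val_score(dense_clf, X, y, cv=5).mean())
```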

  11. A simple way to measure daily lifestyle regularity

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Frank, Ellen; Potts, Jaime M.; Kupfer, David J.

    2002-01-01

    A brief diary instrument to quantify daily lifestyle regularity (SRM-5) is developed and compared with a much longer version of the instrument (SRM-17) described and used previously. Three studies are described. In Study 1, SRM-17 scores (2 weeks) were collected from a total of 293 healthy control subjects (both genders) aged between 19 and 92 years. Five items (1) Get out of bed, (2) First contact with another person, (3) Start work, housework or volunteer activities, (4) Have dinner, and (5) Go to bed were then selected from the 17 items and SRM-5 scores calculated as if these five items were the only ones collected. Comparisons were made with SRM-17 scores from the same subject-weeks, looking at correlations between the two SRM measures, and the effects of age and gender on lifestyle regularity as measured by the two instruments. In Study 2 this process was repeated in a group of 27 subjects who were in remission from unipolar depression after treatment with psychotherapy and who completed SRM-17 for at least 20 successive weeks. SRM-5 and SRM-17 scores were then correlated within an individual using time as the random variable, allowing an indication of how successful SRM-5 was in tracking changes in lifestyle regularity (within an individual) over time. In Study 3 an SRM-5 diary instrument was administered to 101 healthy control subjects (both genders, aged 20-59 years) for two successive weeks to obtain normative measures and to test for correlations with age and morningness. Measures of lifestyle regularity from SRM-5 correlated quite well (about 0.8) with those from SRM-17 both between subjects, and within-subjects over time. As a detector of irregularity as defined by SRM-17, the SRM-5 instrument showed acceptable values of kappa (0.69), sensitivity (74%) and specificity (95%). There were, however, differences in mean level, with SRM-5 scores being about 0.9 units [about one standard deviation (SD)] above SRM-17 scores from the same subject-weeks. SRM-5

  12. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with varying 2-14 mm kernel sizes. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method combining a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy.
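
    The Tikhonov-regularized deconvolution step at the heart of such methods can be sketched in the Fourier domain (a generic illustration with a hypothetical kernel h, not the exact REV-SHARP pipeline with its varying spherical kernels):

```python
import numpy as np

def tikhonov_deconv(y, h, lam):
    # Tikhonov-regularized deconvolution in the Fourier domain:
    # x = F^{-1}[ conj(H) Y / (|H|^2 + lam) ].
    H = np.fft.fftn(h, s=y.shape)
    Y = np.fft.fftn(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifftn(X))
```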

  13. Nonlinear Identification Using Orthogonal Forward Regression With Nested Optimal Regularization.

    PubMed

    Hong, Xia; Chen, Sheng; Gao, Junbin; Harris, Chris J

    2015-12-01

    An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since, as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the same LOOMSE is adopted for model selection, our proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of support vector machine and least absolute shrinkage and selection operator as well as the LROLS algorithm.
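
    The LOO criterion exploited here has a closed form for linear-in-the-parameters models, which avoids refitting n times. A minimal sketch for a single shared regularization parameter (the paper optimizes one per kernel, inside OFR):

```python
import numpy as np

def loo_mse_ridge(X, y, lam):
    # Exact leave-one-out MSE for ridge via the hat matrix
    # H = X (X'X + lam I)^{-1} X'; e_loo_i = e_i / (1 - H_ii).
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    e = y - H @ y
    return np.mean((e / (1.0 - np.diag(H))) ** 2)

# Choose the regularization parameter by scanning a grid (toy data).
rng = np.random.default_rng(1)
X, y = rng.normal(size=(40, 10)), rng.normal(size=40)
best = min((loo_mse_ridge(X, y, lam), lam) for lam in [0.01, 0.1, 1.0, 10.0])
print("best lambda:", best[1])
```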

  14. Application of real rock pore-throat statistics to a regular pore network model

    SciTech Connect

    Sarker, M.R.; McIntyre, D.; Ferer, M.; Siddiqui, S.; Bromhal, G.

    2011-01-01

    This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, ground water flow, etc. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data are generated on the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for routine core analysis, which can be time consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results.

  15. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  16. Light-Front Quantization of the Vector Schwinger Model with a Photon Mass Term in Faddeevian Regularization

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.

    2016-07-01

    In this talk, we study the light-front quantization of the vector Schwinger model with a photon mass term in Faddeevian regularization, describing two-dimensional electrodynamics with massless fermions but with a mass term for the U(1) gauge field. This theory is gauge-non-invariant (GNI). We construct a gauge-invariant (GI) theory using the Stueckelberg mechanism and then recover the physical content of the original GNI theory from the newly constructed GI theory under some special gauge-fixing conditions (GFCs). We then study the light-front quantization of this new GI theory.

  17. Inverse problems: Fuzzy representation of uncertainty generates a regularization

    NASA Technical Reports Server (NTRS)

    Kreinovich, V.; Chang, Ching-Chuang; Reznik, L.; Solopchenko, G. N.

    1992-01-01

    In many applied problems (geophysics, medicine, and astronomy) we cannot directly measure the values x(t) of the desired physical quantity x in different moments of time, so we measure some related quantity y(t), and then we try to reconstruct the desired values x(t). This problem is often ill-posed in the sense that two essentially different functions x(t) are consistent with the same measurement results. So, in order to get a reasonable reconstruction, we must have some additional prior information about the desired function x(t). Methods that use this information to choose x(t) from the set of all possible solutions are called regularization methods. In some cases, we know the statistical characteristics both of x(t) and of the measurement errors, so we can apply statistical filtering methods (well-developed since the invention of the Wiener filter). In some situations, we know the properties of the desired process, e.g., we know that the derivative of x(t) is limited by some number delta, etc. In this case, we can apply standard regularization techniques (e.g., Tikhonov's regularization). In many cases, however, we have only uncertain knowledge about the values of x(t), about the rate with which the values of x(t) can change, and about the measurement errors. In these cases, usually one of the existing regularization methods is applied. There exist several heuristics that choose such a method. The problem with these heuristics is that they often lead to choosing different methods, and these methods lead to different functions x(t). Therefore, the results x(t) of applying these heuristic methods are often unreliable. We show that if we use fuzzy logic to describe this uncertainty, then we automatically arrive at a unique regularization method, whose parameters are uniquely determined by the experts' knowledge. Although we start with a fuzzy description, the resulting regularization turns out to be quite crisp.

  18. Subspace scheduling and parallel implementation of non-systolic regular iterative algorithms

    SciTech Connect

    Roychowdhury, V.P.; Kailath, T.

    1989-01-01

    The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to a renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs; non-systolic RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the so-called hyperplanar scheduling for systolic algorithms can no longer be used to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays that will be compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower-dimensional processor arrays for systolic algorithms.

  1. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  2. 20 CFR 220.17 - Recovery from disability for work in the regular occupation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Work in an Employee's Regular Railroad Occupation § 220.17 Recovery from disability for work in the regular occupation. (a) General. Disability for work in the regular occupation will end if— (1) There is... the duties of his or her regular occupation. The Board provides a trial work period before...

  3. 20 CFR 220.17 - Recovery from disability for work in the regular occupation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Work in an Employee's Regular Railroad Occupation § 220.17 Recovery from disability for work in the regular occupation. (a) General. Disability for work in the regular occupation will end if— (1) There is... the duties of his or her regular occupation. The Board provides a trial work period before...

  4. 20 CFR 216.16 - What is regular non-railroad employment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false What is regular non-railroad employment. 216... regular non-railroad employment. (a) Regular non-railroad employment is full or part-time employment for pay. (b) Regular non-railroad employment does not include any of the following: (1)...

  5. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It addresses the small-sample-size and ill-posed problems from which QDA and LDA suffer through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
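
    The covariance regularization that lets RDA interpolate between LDA and QDA can be sketched as follows (Friedman-style shrinkage; lam pulls each class covariance toward the pooled estimate, gamma toward a scaled identity):

```python
import numpy as np

def rda_covariance(S_k, S_pooled, lam, gamma):
    # Friedman-style RDA: shrink the class covariance toward the pooled
    # covariance (LDA limit as lam -> 1), then toward a scaled identity
    # (gamma > 0), stabilizing estimates in small-sample settings.
    p = S_k.shape[0]
    S = (1.0 - lam) * S_k + lam * S_pooled
    return (1.0 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
```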

  6. Regular Expression-Based Learning for METs Value Extraction

    PubMed Central

    Redd, Douglas; Kuang, Jinqiu; Mohanty, April; Bray, Bruce E.; Zeng-Treitler, Qing

    2016-01-01

    Functional status as measured by exercise capacity is an important clinical variable in the care of patients with cardiovascular diseases. Exercise capacity is commonly reported in terms of Metabolic Equivalents (METs). In the medical records, METs can often be found in a variety of clinical notes. To extract METs values, we adapted a machine-learning algorithm called REDEx to automatically generate regular expressions. Trained and tested on a set of 2701 manually annotated text snippets (i.e. short pieces of text), the regular expressions were able to achieve good accuracy and F-measure of 0.89 and 0.86. This extraction tool will allow us to process the notes of millions of cardiovascular patients and extract METs value for use by researchers and clinicians. PMID:27570673

  7. Regular Expression-Based Learning for METs Value Extraction.

    PubMed

    Redd, Douglas; Kuang, Jinqiu; Mohanty, April; Bray, Bruce E; Zeng-Treitler, Qing

    2016-01-01

    Functional status as measured by exercise capacity is an important clinical variable in the care of patients with cardiovascular diseases. Exercise capacity is commonly reported in terms of Metabolic Equivalents (METs). In the medical records, METs can often be found in a variety of clinical notes. To extract METs values, we adapted a machine-learning algorithm called REDEx to automatically generate regular expressions. Trained and tested on a set of 2701 manually annotated text snippets (i.e. short pieces of text), the regular expressions were able to achieve good accuracy and F-measure of 0.89 and 0.86. This extraction tool will allow us to process the notes of millions of cardiovascular patients and extract METs value for use by researchers and clinicians.
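
    For illustration, a hand-written pattern of the kind REDEx induces automatically (a hypothetical pattern, not one of the learned expressions):

```python
import re

# Match a numeric value followed by "MET"/"METs" (case-insensitive).
METS_PATTERN = re.compile(r'(\d+(?:\.\d+)?)\s*METs?\b', re.IGNORECASE)

for snippet in ["Patient achieved 10 METs on treadmill.",
                "estimated exercise capacity 4.5 mets"]:
    m = METS_PATTERN.search(snippet)
    if m:
        print(float(m.group(1)))
```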

  8. Total Variation Regularization Used in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Wang, Huaxiang; Tang, Lei

    2007-06-01

    To address the ill-posed problem and poor resolution in electrical capacitance tomography (ECT), a new image reconstruction algorithm based on total variation (TV) regularization is proposed and a new self-adaptive mesh refinement strategy is put forward. Compared with the conventional Tikhonov regularization, this new algorithm not only stabilizes the reconstruction, but also enhances the distinguishability of the reconstructed image in areas with discontinuous medium distribution. It possesses a good edge-preserving property. The self-adaptive mesh generation technique based on this algorithm can refine the mesh automatically in specific areas according to the medium distribution. This strategy maintains the high resolution achieved by refining all elements over the region while reducing the computational load, thereby speeding up the reconstruction. Both simulation and experimental results show that this algorithm has advantages in terms of resolution and real-time performance.
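
    The edge-preserving behaviour of TV regularization is easiest to see in a denoising sketch (gradient descent on a smoothed TV energy with periodic boundaries; the paper applies TV inside the full ECT inverse problem, which also involves the sensitivity matrix):

```python
import numpy as np

def tv_denoise(f, lam=0.1, eps=1e-3, step=0.2, n_iters=300):
    # Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps),
    # with forward differences and periodic boundaries.
    u = f.astype(float).copy()
    for _ in range(n_iters):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u
```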

  9. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic technique shows that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. By introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is obtained, and an upper bound on the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
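
    A minimal sketch of regularized potential-field extrapolation in the Fourier domain: each mode of the photospheric B_z decays as exp(-|k|z), and a Gaussian damping of the initial data plays the role of the smoothing described above (sigma is a hypothetical smoothing length, not the paper's sensitivity-derived parameter):

```python
import numpy as np

def potential_field_extrapolate(bz0, dx, heights, sigma=0.0):
    # Extrapolate the vertical field B_z upward assuming a potential field:
    # each Fourier mode decays as exp(-|k| z). The Gaussian damping factor
    # exp(-(sigma*|k|)^2) regularizes (smooths) the initial data.
    ny, nx = bz0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    K = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    B0 = np.fft.fft2(bz0) * np.exp(-(sigma * K) ** 2)
    return [np.real(np.fft.ifft2(B0 * np.exp(-K * z))) for z in heights]
```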

  10. Vertical Accretion in Microtidal Regularly and Irregularly Flooded Estuarine Marshes

    NASA Astrophysics Data System (ADS)

    Craft, C. B.; Seneca, E. D.; Broome, S. W.

    1993-10-01

    Vertical accretion rates were measured in microtidal (tidal amplitude less than 0·3 m) regularly (flooded twice daily by the astronomical tides), and irregularly flooded (inundated only during spring and storm tides) estuarine marshes in North Carolina to determine whether these marshes are keeping pace with rising sea-level and to quantify the relative contribution of organic matter and mineral sediment to vertical growth. Accretion rates in streamside and backmarsh locations of each marsh were determined by measuring the Cesium-137 (¹³⁷Cs) activity in 2 cm soil depth increments. Soil bulk density, organic carbon (C), total nitrogen (N) and particle density also were measured to estimate rates of accumulation of organic matter (OM), mineral sediment and nutrients. With the exception of the backmarsh location of the regularly flooded marsh, vertical accretion rates in the marshes studied matched or exceeded the recent (1940-80) rate of sea-level rise (1·9 mm year⁻¹) along the North Carolina coast. Accretion rates in the irregularly flooded marsh averaged 3·6 ± 0·5 mm year⁻¹ along the streamside and 2·4 ± 0·2 mm year⁻¹ in the backmarsh. The regularly flooded marsh had lower accretion rates, averaging 2·7 ± 0·3 mm year⁻¹ along the streamside and 0·9 ± 0·2 mm year⁻¹ in the backmarsh. Vertical accretion in the irregularly flooded marsh occurred via in situ production and accumulation of organic matter. Rates of soil OM (196-280 g m⁻² year⁻¹), organic C (106-146 g m⁻² year⁻¹) and total N (6·9-10·3 g m⁻² year⁻¹) accumulation were much higher in the irregularly flooded marsh as compared to the regularly flooded marsh (OM = 51-137 g m⁻² year⁻¹, C = 21-59 g m⁻² year⁻¹, N = 1·3-4·1 g m⁻² year⁻¹). In contrast, vertical accretion in the regularly flooded marsh was sustained by allochthonous inputs of mineral sediment. Inorganic sediment deposition contributed 677-1139 g m⁻² year⁻¹ mineral matter to the regularly flooded marsh as compared

  11. Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation.

    PubMed

    Meng, Juan; Hu, Guyu; Li, Dong; Zhang, Yanyan; Pan, Zhisong

    2016-01-01

    Domain adaptation has received much attention as a major form of transfer learning. One issue that must be considered in domain adaptation is the gap between the source domain and the target domain. In order to improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation that combines source and target data, with a new regularizer that takes generalization bounds into account. This regularization term uses the integral probability metric (IPM) as the distance between the source domain and the target domain, and can thus bound the testing error of an existing predictor. Since the computation of the IPM involves only two distributions, this generalization term is independent of specific classifiers. With popular learning models, the empirical risk minimization is expressed as a general convex optimization problem and can thus be solved effectively by existing tools. Empirical studies on synthetic data for regression and real-world data for classification show the effectiveness of this method.
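
    A common IPM instance is the squared maximum mean discrepancy with an RBF kernel, which needs only samples from the two domains. A minimal sketch (the biased estimator, for brevity):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    # Biased squared MMD with an RBF kernel - an integral probability
    # metric over the RKHS unit ball - between samples X and Y.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```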

  12. Resolving intravoxel fiber architecture using nonconvex regularized blind compressed sensing

    NASA Astrophysics Data System (ADS)

    Chu, C. Y.; Huang, J. P.; Sun, C. Y.; Liu, W. Y.; Zhu, Y. M.

    2015-03-01

    In diffusion magnetic resonance imaging, accurate and reliable estimation of intravoxel fiber architectures is a major prerequisite for tractography algorithms or any other derived statistical analysis. Several methods have been proposed that estimate intravoxel fiber architectures using low angular resolution acquisitions owing to their shorter acquisition time and relatively low b-values. But these methods are highly sensitive to noise. In this work, we propose a nonconvex regularized blind compressed sensing approach to estimate intravoxel fiber architectures in low angular resolution acquisitions. The method models diffusion-weighted (DW) signals as a sparse linear combination of unfixed reconstruction basis functions and introduces a nonconvex regularizer to enhance the noise immunity. We present a general solving framework to simultaneously estimate the sparse coefficients and the reconstruction basis. Experiments on synthetic, phantom, and real human brain DW images demonstrate the superiority of the proposed approach.

  13. Regular Expressions at Their Best: A Case for Rational Design

    NASA Astrophysics Data System (ADS)

    Le Maout, Vincent

    Regular expressions are often an integral part of program customization and many algorithms have been proposed for transforming them into suitable data structures. These algorithms can be divided into two main classes: backtracking or automaton-based algorithms. Surprisingly, the latter class draws less attention than the former, even though automaton-based algorithms represent the oldest and by far the fastest solutions when carefully designed. Only two open-source automaton-based implementations stand out: PCRE and the recent RE2 from Google. We have developed, and present here, a competitive automaton-based regular expression engine on top of the LGPL C++ Automata Standard Template Library (ASTL), whose efficiency and scalability remain unmatched and which distinguishes itself through a unique and rigorous STL-like design.

  14. Mechanisms of evolution of avalanches in regular graphs.

    PubMed

    Handford, Thomas P; Pérez-Reche, Francisco J; Taraskin, Sergei N

    2013-06-01

    A mapping of avalanches occurring in the zero-temperature random-field Ising model to life periods of a population experiencing immigration is established. Such a mapping allows the microscopic criteria for the occurrence of an infinite avalanche in a q-regular graph to be determined. A key factor for an avalanche of spin flips to become infinite is that it interacts in an optimal way with previously flipped spins. Based on these criteria, we explain why an infinite avalanche can occur in q-regular graphs only for q>3 and suggest that this criterion might be relevant for other systems. The generating function techniques developed for branching processes are applied to obtain analytical expressions for the durations, pulse shapes, and power spectra of the avalanches. The results show that only very long avalanches exhibit a significant degree of universality.

  15. Mechanisms of evolution of avalanches in regular graphs

    NASA Astrophysics Data System (ADS)

    Handford, Thomas P.; Pérez-Reche, Francisco J.; Taraskin, Sergei N.

    2013-06-01

    A mapping of avalanches occurring in the zero-temperature random-field Ising model to life periods of a population experiencing immigration is established. Such a mapping allows the microscopic criteria for the occurrence of an infinite avalanche in a q-regular graph to be determined. A key factor for an avalanche of spin flips to become infinite is that it interacts in an optimal way with previously flipped spins. Based on these criteria, we explain why an infinite avalanche can occur in q-regular graphs only for q>3 and suggest that this criterion might be relevant for other systems. The generating function techniques developed for branching processes are applied to obtain analytical expressions for the durations, pulse shapes, and power spectra of the avalanches. The results show that only very long avalanches exhibit a significant degree of universality.

  16. The effect of spacing regularity on visual crowding.

    PubMed

    Saarela, T P; Westheimer, G; Herzog, M H

    2010-08-18

    Crowding limits peripheral visual discrimination and recognition: a target easily identified in isolation becomes impossible to recognize when surrounded by other stimuli, often called flankers. Most accounts of crowding predict less crowding when the target-flanker distance increases. On the other hand, the importance of perceptual organization and target-flanker coherence in crowding has recently received more attention. We investigated the effect of target-flanker spacing on crowding in multi-element stimulus arrays. We show that increasing the average distance between the target and the flankers does not always decrease the amount of crowding but can even sometimes increase it. We suggest that the regularity of inter-element spacing plays an important role in determining the strength of crowding: regular spacing leads to the perception of a single, coherent, texture-like stimulus, making judgments about the individual elements difficult.

  18. Tikhonov regularization-based operational transfer path analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Wei; Lu, Yingying; Zhang, Zhousuo

    2016-06-01

    To overcome ill-posed problems in operational transfer path analysis (OTPA) and improve the stability of solutions, this paper proposes a novel OTPA based on Tikhonov regularization, which considers both the fitting degree and the stability of solutions. Firstly, the fundamental theory of Tikhonov regularization-based OTPA is presented, and comparative studies are provided to validate its effectiveness on ill-posed problems. Secondly, transfer path analysis and source contribution evaluations are comparatively studied for numerical case studies on spherical radiating acoustical sources. Finally, transfer path analysis and source contribution evaluations are provided for experimental case studies on a test bed with thin shell structures. This study provides more accurate transfer path analysis for mechanical systems, which can benefit vibration reduction through structural path optimization. Furthermore, with accurate evaluation of source contributions, vibration monitoring and control by actively controlling vibration sources can be carried out effectively.
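
    The core computation is a damped least-squares solve. The sketch below is illustrative (not the paper's code): OTPA estimates a transmissibility matrix T from operational reference signals X and responses Y via X T ≈ Y, and the Tikhonov term stabilizes the solve when reference channels are nearly collinear.

        import numpy as np

        def tikhonov_otpa(X, Y, lam=1e-2):
            """Solve (X^T X + lam*I) T = X^T Y for the transmissibility T."""
            n = X.shape[1]
            return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ Y)

        # Ill-conditioned demo: two nearly identical reference channels.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 3))
        X[:, 2] = X[:, 0] + 1e-6 * rng.standard_normal(200)
        Y = X @ np.array([[1.0], [2.0], [0.0]]) + 0.01 * rng.standard_normal((200, 1))
        print(tikhonov_otpa(X, Y))       # stays bounded despite collinearity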

  19. Statistical regularities in the rank-citation profile of scientists

    PubMed Central

    Petersen, Alexander M.; Stanley, H. Eugene; Succi, Sauro

    2011-01-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile ci(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each ci(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different ci(r) profiles, our results demonstrate the utility of the βi scaling parameter in conjunction with hi for quantifying individual publication impact. We show that the total number of citations Ci tallied from a scientist's Ni papers scales as Ci ~ hi^(1+βi). Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress. PMID:22355696

  20. Universal regularizers for robust sparse coding and modeling.

    PubMed

    Ramírez, Ignacio; Sapiro, Guillermo

    2012-09-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared with the more standard l(0) or l(1) ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.
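
    The codelength interpretation can be written out schematically (a standard MAP identity, not the paper's universal-coding construction): encoding data y given a dictionary D costs

        -\log P(y, a) = \frac{1}{2\sigma^{2}} \| y - D a \|_{2}^{2} \; + \; \sum_{j} \big( -\log p(a_j) \big) + \text{const},

    so a Laplacian prior p(a_j) ∝ exp(-θ|a_j|) yields the usual l1 term, while a universal regularizer replaces the fixed prior with a universal (e.g., mixture) code over the unknown parameter θ.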

  1. Regularizing the r-mode Problem for Nonbarotropic Relativistic Stars

    NASA Technical Reports Server (NTRS)

    Lockitch, Keith H.; Andersson, Nils; Watts, Anna L.

    2004-01-01

    We present results for r-modes of relativistic nonbarotropic stars. We show that the main differential equation, which is formally singular at lowest order in the slow-rotation expansion, can be regularized if one considers the initial value problem rather than the normal mode problem. However, a more physically motivated way to regularize the problem is to include higher order terms. This allows us to develop a practical approach for solving the problem and we provide results that support earlier conclusions obtained for uniform density stars. In particular, we show that there will exist a single r-mode for each permissible combination of l and m. We discuss these results and provide some caveats regarding their usefulness for estimates of gravitational-radiation reaction timescales. The close connection between the seemingly singular relativistic r-mode problem and issues arising because of the presence of co-rotation points in differentially rotating stars is also clarified.

  2. Partial Regularity for Holonomic Minimisers of Quasiconvex Functionals

    NASA Astrophysics Data System (ADS)

    Hopper, Christopher P.

    2016-10-01

    We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals, over a class of Sobolev mappings into a compact Riemannian manifold, to which such mappings are said to be holonomically constrained. Our approach uses the lifting of Sobolev mappings to the universal covering space, the connectedness of the covering space, an application of Ekeland's variational principle and a certain tangential A-harmonic approximation lemma obtained directly via a Lipschitz approximation argument. This allows regularity to be established directly on the level of the gradient. Several applications to variational problems in condensed matter physics with broken symmetries are also discussed, in particular those concerning the superfluidity of liquid helium-3 and nematic liquid crystals.
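
    For orientation, the strict quasiconvexity hypothesis in partial regularity results of this kind typically takes the following form (quadratic-growth version; the paper's precise hypotheses may differ):

        \int_{B} \big( F(\xi + \nabla\varphi) - F(\xi) \big)\, dx \;\ge\; \gamma \int_{B} |\nabla\varphi|^{2}\, dx
        \qquad \text{for all } \varphi \in C_c^{\infty}(B; \mathbb{R}^{N}),

    for some γ > 0, uniformly over matrices ξ.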

  3. Total variation regularization for bioluminescence tomography with the split Bregman method.

    PubMed

    Feng, Jinchao; Qin, Chenghu; Jia, Kebin; Zhu, Shouping; Liu, Kai; Han, Dong; Yang, Xin; Gao, Quansheng; Tian, Jie

    2012-07-01

    Regularization methods have been broadly applied to bioluminescence tomography (BLT) to obtain stable solutions, including l2 and l1 regularizations. However, l2 regularization can oversmooth reconstructed images and l1 regularization may sparsify the source distribution, which degrades image quality. In this paper, the use of total variation (TV) regularization in BLT is investigated. Since a nonnegativity constraint can lead to improved image quality, the nonnegative constraint should be considered in BLT. However, TV regularization with a nonnegativity constraint is extremely difficult to solve due to its nondifferentiability and nonlinearity. The aim of this work is to validate the split Bregman method to minimize the TV regularization problem with a nonnegativity constraint for BLT. The performance of split Bregman-resolved TV (SBRTV) based BLT reconstruction algorithm was verified with numerical and in vivo experiments. Experimental results demonstrate that the SBRTV regularization can provide better regularization quality over l2 and l1 regularizations.
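
    A minimal one-dimensional sketch of the split Bregman iteration for TV with a nonnegativity constraint is given below. It is illustrative only: the BLT forward operator and the paper's constraint handling are more involved, and nonnegativity is imposed here by simple projection.

        import numpy as np

        def shrink(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def split_bregman_tv(A, f, lam=1.0, mu=10.0, n_iter=100):
            """min_u 0.5*||A u - f||^2 + lam*||D u||_1  s.t. u >= 0 (toy)."""
            n = A.shape[1]
            D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]     # forward differences
            u, d, b = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
            lhs = A.T @ A + mu * D.T @ D
            for _ in range(n_iter):
                # u-step: quadratic subproblem -> linear solve, then project.
                u = np.linalg.solve(lhs, A.T @ f + mu * D.T @ (d - b))
                u = np.maximum(u, 0.0)
                # d-step: decoupled shrinkage; b-step: Bregman update.
                d = shrink(D @ u + b, lam / mu)
                b = b + D @ u - d
            return u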

  4. Knowing More than One Can Say: The Early Regular Plural

    ERIC Educational Resources Information Center

    Zapf, Jennifer A.; Smith, Linda B.

    2009-01-01

    This paper reports on partial knowledge in two-year-old children's learning of the regular English plural. In Experiments 1 and 2, children were presented with one kind and its label and then were either presented with two of that same kind (A→AA) or the initial picture next to a very different thing (A→AB). The children in…

  5. Visual Mismatch Negativity Reveals Automatic Detection of Sequential Regularity Violation

    PubMed Central

    Stefanics, Gábor; Kimura, Motohiro; Czigler, István

    2011-01-01

    Sequential regularities are abstract rules based on repeating sequences of environmental events, which are useful to make predictions about future events. Here, we tested whether the visual system is capable of detecting sequential regularity in unattended stimulus sequences. The visual mismatch negativity (vMMN) component of the event-related potentials is sensitive to the violation of complex regularities (e.g., object-related characteristics, temporal patterns). We used the vMMN component as an index of violation of conditional (if, then) regularities. In the first experiment, to investigate emergence of vMMN and other change-related activity to the violation of conditional rules, red and green disk patterns were delivered in pairs. The majority of pairs consisted of disk patterns with identical colors, whereas in deviant pairs the colors were different. The probabilities of the two colors were equal. The second member of the deviant pairs elicited a vMMN with longer latency and more extended spatial distribution to deviants with lower probability (10 vs. 30%). In the second (control) experiment the emergence of vMMN to violation of a simple, feature-related rule was studied using oddball sequences of stimulus pairs where deviant colors were presented with 20% probabilities. Deviant colored patterns elicited a vMMN, and this component was larger for the second member of the pair, i.e., after a shorter inter-stimulus interval. This result corresponds to the SOA/(v)MMN relationship, expected on the basis of a memory-mismatch process. Our results show that the system underlying vMMN is sensitive to abstract, conditional rules. Representation of such rules implicates expectation of a subsequent event, therefore vMMN can be considered as a correlate of violated predictions about the characteristics of environmental events. PMID:21629766

  6. Effects of regular exercise training on skeletal muscle contractile function

    NASA Technical Reports Server (NTRS)

    Fitts, Robert H.

    2003-01-01

    Skeletal muscle function is critical to movement and one's ability to perform daily tasks, such as eating and walking. One objective of this article is to review the contractile properties of fast and slow skeletal muscle and single fibers, with particular emphasis on the cellular events that control or rate limit the important mechanical properties. Another important goal of this article is to present the current understanding of how the contractile properties of limb skeletal muscle adapt to programs of regular exercise.

  7. Holographic Wilson loops, Hamilton-Jacobi equation, and regularizations

    NASA Astrophysics Data System (ADS)

    Pontello, Diego; Trinchero, Roberto

    2016-04-01

    The minimal areas for surfaces whose borders are rectangular and circular loops are calculated using the Hamilton-Jacobi (HJ) equation. This amounts to solving the HJ equation for the value of the minimal area, without calculating the shape of the corresponding surface. This is done for bulk geometries that are asymptotically anti-de Sitter (AdS). For the rectangular contour, the HJ equation, which is separable, can be solved exactly. For the circular contour an expansion in powers of the radius is implemented. The HJ approach naturally leads to a regularization which consists in locating the contour away from the border. The results are compared with the ε-regularization which leaves the contour at the border and calculates the area of the corresponding minimal surface up to a diameter smaller than the one of the contour at the border. The results for the circular loop do not coincide if the expansion parameter is taken to be the radius of the contour at the border. It is shown that using this expansion parameter the ε-regularization leads to incorrect results for certain solvable non-AdS cases. However, if the expansion parameter is taken to be the radius of the minimal surface whose area is computed, then the results coincide with the HJ scheme. This is traced back to the fact that in the HJ case the expansion parameter for the area of a minimal surface is intrinsic to the surface; however, the radius of the contour at the border is related to the way one chooses to regularize in the ε-scheme the calculation of this area.

  8. Nonlinear Regularizing Effect for Hyperbolic Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Golse, François

    2010-03-01

    The Tartar-DiPerna compensated compactness method, used initially to construct global weak solutions of hyperbolic systems of conservation laws for large data, can be adapted in order to provide some regularity estimates on these solutions. This note treats two examples: (a) the case of scalar conservation laws with convex flux, and (b) the Euler system for a polytropic, compressible fluid, in space dimension one.
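
    Written out, the two model problems are the scalar conservation law with uniformly convex flux and the one-dimensional polytropic Euler system:

        \partial_t u + \partial_x f(u) = 0, \qquad f'' \ge c > 0;
        \qquad
        \partial_t \rho + \partial_x (\rho v) = 0, \quad
        \partial_t (\rho v) + \partial_x \big( \rho v^{2} + \kappa \rho^{\gamma} \big) = 0,

    the latter with polytropic pressure p(ρ) = κρ^γ.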

  9. 3D harmonic phase tracking with anatomical regularization.

    PubMed

    Zhou, Yitian; Bernard, Olivier; Saloux, Eric; Manrique, Alain; Allain, Pascal; Makram-Ebeid, Sherif; De Craene, Mathieu

    2015-12-01

    This paper presents a novel algorithm that extends HARP to handle 3D tagged MRI images. HARP results were regularized by an original regularization framework defined in an anatomical space of coordinates. In the meantime, myocardium incompressibility was integrated in order to correct the radial strain which is reported to be more challenging to recover. Both the tracking and regularization of LV displacements were done on a volumetric mesh to be computationally efficient. Also, a window-weighted regression method was extended to cardiac motion tracking which helps maintain a low complexity even at finer scales. On healthy volunteers, the tracking accuracy was found to be as accurate as the best candidates of a recent benchmark. Strain accuracy was evaluated on synthetic data, showing low bias and strain errors under 5% (excluding outliers) for longitudinal and circumferential strains, while the second and third quartiles of the radial strain errors are in the (-5%,5%) range. In clinical data, strain dispersion was shown to correlate with the extent of transmural fibrosis. Also, reduced deformation values were found inside infarcted segments.

  10. Comparison of regularization methods for human cardiac diffusion tensor MRI.

    PubMed

    Frindel, Carole; Robini, Marc; Croisille, Pierre; Zhu, Yue-Min

    2009-06-01

    Diffusion tensor MRI (DT-MRI) is an imaging technique that is gaining importance in clinical applications. However, there is very little work concerning the human heart. When applying DT-MRI to in vivo human hearts, the data have to be acquired rapidly to minimize artefacts due to cardiac and respiratory motion and to improve patient comfort, often at the expense of image quality. This results in diffusion weighted (DW) images corrupted by noise, which can have a significant impact on the shape and orientation of tensors and leads to diffusion tensor (DT) datasets that are not suitable for fibre tracking. This paper compares regularization approaches that operate either on diffusion weighted images or on diffusion tensors. Experiments on synthetic data show that, for high signal-to-noise ratio (SNR), the methods operating on DW images produce the best results; they substantially reduce noise error propagation throughout the diffusion calculations. However, when the SNR is low, Rician Cholesky and Log-Euclidean DT regularization methods handle the bias introduced by Rician noise and ensure symmetry and positive definiteness of the tensors. Results based on a set of sixteen ex vivo human hearts show that the different regularization methods tend to provide equivalent results. PMID:19356971
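
    The Log-Euclidean idea mentioned above is easy to sketch: map each symmetric positive-definite tensor through the matrix logarithm, operate linearly there, and map back with the matrix exponential, which guarantees an SPD result. A minimal illustration (not the paper's pipeline):

        import numpy as np

        def spd_log(T):
            w, V = np.linalg.eigh(T)
            return V @ np.diag(np.log(w)) @ V.T

        def spd_exp(L):
            w, V = np.linalg.eigh(L)
            return V @ np.diag(np.exp(w)) @ V.T

        def log_euclidean_smooth(tensors):
            """Moving average of a 1D sequence of SPD tensors in log-space."""
            logs = np.array([spd_log(T) for T in tensors])
            out = np.empty_like(logs)
            for i in range(len(logs)):
                lo, hi = max(0, i - 1), min(len(logs), i + 2)
                out[i] = logs[lo:hi].mean(axis=0)
            return np.array([spd_exp(L) for L in out])

    Direct Euclidean averaging of diffusion tensors is known to inflate determinants (the swelling effect); averaging in log-space avoids this while keeping every output tensor symmetric positive definite.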

  11. Channeling power across ecological systems: social regularities in community organizing.

    PubMed

    Christens, Brian D; Inzeo, Paula Tran; Faust, Victoria

    2014-06-01

    Relational and social network perspectives provide opportunities for more holistic conceptualizations of phenomena of interest in community psychology, including power and empowerment. In this article, we apply these tools to build on multilevel frameworks of empowerment by proposing that networks of relationships between individuals constitute the connective spaces between ecological systems. Drawing on an example of a model for grassroots community organizing practiced by WISDOM—a statewide federation supporting local community organizing initiatives in Wisconsin—we identify social regularities (i.e., relational and temporal patterns) that promote empowerment and the development and exercise of social power through building and altering relational ties. Through an emphasis on listening-focused one-to-one meetings, reflection, and social analysis, WISDOM organizing initiatives construct and reinforce social regularities that develop social power in the organizing initiatives and advance psychological empowerment among participant leaders in organizing. These patterns are established by organizationally driven brokerage and mobilization of interpersonal ties, some of which span ecological systems.Hence, elements of these power-focused social regularities can be conceptualized as cross-system channels through which micro-level empowerment processes feed into macro-level exercise of social power, and vice versa. We describe examples of these channels in action, and offer recommendations for theory and design of future action research [corrected] .

  12. Hessian-Regularized Co-Training for Social Activity Recognition

    PubMed Central

    Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang

    2014-01-01

    Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training algorithms have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms. PMID:25259945
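
    The underlying co-training loop (without the Hessian term) can be sketched as follows; this is the generic paradigm the paper regularizes, with illustrative names, not the authors' code.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def co_train(X1, X2, y, n_rounds=10, n_add=5):
            """Two-view co-training; y == -1 marks unlabeled samples."""
            y = y.copy()
            for _ in range(n_rounds):
                for Xa in (X1, X2):
                    labeled = y != -1
                    unlabeled = np.where(~labeled)[0]
                    if len(unlabeled) == 0:
                        return y
                    clf = LogisticRegression(max_iter=1000)
                    clf.fit(Xa[labeled], y[labeled])
                    conf = clf.predict_proba(Xa[unlabeled]).max(axis=1)
                    # Each view pseudo-labels the samples it trusts most;
                    # the other view then trains on them in its turn.
                    pick = unlabeled[np.argsort(-conf)[:n_add]]
                    y[pick] = clf.predict(Xa[pick])
            return y

    Hessian regularization enters by adding a penalty on each view's regression function along the estimated data manifold, which stabilizes exactly the early rounds in which erroneous pseudo-labels are most damaging.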

  13. Nonrigid registration using regularization that accommodates local tissue rigidity

    NASA Astrophysics Data System (ADS)

    Ruan, Dan; Fessler, Jeffrey A.; Roberson, Michael; Balter, James; Kessler, Marc

    2006-03-01

    Regularized nonrigid medical image registration algorithms usually estimate the deformation by minimizing a cost function, consisting of a similarity measure and a penalty term that discourages "unreasonable" deformations. Conventional regularization methods enforce homogeneous smoothness properties of the deformation field; less work has been done to incorporate tissue-type-specific elasticity information. Yet ignoring the elasticity differences between tissue types can result in non-physical results, such as bone warping. Bone structures should move rigidly (locally), unlike the more elastic deformation of soft tissues. Existing solutions for this problem either treat different regions of an image independently, which requires precise segmentation and incurs boundary issues; or use an empirical spatially varying "filter" to "correct" the deformation field, which requires knowledge of a stiffness map and departs from the cost-function formulation. We propose a new approach to incorporate tissue rigidity information into the nonrigid registration problem, by developing a space-variant regularization function that encourages the local Jacobian of the deformation to be a nearly orthogonal matrix in rigid image regions, while allowing more elastic deformations elsewhere. For the case of X-ray CT data, we use a simple monotonically increasing function of the CT numbers (in HU) as a "rigidity index" since bones typically have the highest CT numbers. Unlike segmentation-based methods, this approach is flexible enough to account for partial volume effects. Results using a B-spline deformation parameterization illustrate that the proposed approach improves registration accuracy in inhale-exhale CT scans with minimal computational penalty.
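
    The heart of the method, a space-variant penalty on the local deformation Jacobian, can be sketched as follows (simplified 2D version with illustrative names, not the authors' B-spline implementation):

        import numpy as np

        def rigidity_index(hu, lo=100.0, hi=1000.0):
            """Monotone map from CT numbers (HU) to [0, 1]; bone -> ~1."""
            return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

        def rigidity_penalty(J, w):
            """Sum over voxels of w * ||J^T J - I||_F^2.

            J : (..., 2, 2) local Jacobians of the deformation
            w : (...,)      rigidity weights from rigidity_index
            Near w = 1 the term drives J toward an orthogonal matrix
            (local rigidity); near w = 0 elastic deformation is free.
            """
            JtJ = np.einsum('...ki,...kj->...ij', J, J)
            dev = JtJ - np.eye(2)
            return np.sum(w * np.sum(dev ** 2, axis=(-2, -1)))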

  15. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.

  16. Impact on asteroseismic analyses of regular gaps in Kepler data

    NASA Astrophysics Data System (ADS)

    García, R. A.; Mathur, S.; Pires, S.; Régulo, C.; Bellamy, B.; Pallé, P. L.; Ballot, J.; Barceló Forteza, S.; Beck, P. G.; Bedding, T. R.; Ceillier, T.; Roca Cortés, T.; Salabert, D.; Stello, D.

    2014-08-01

    Context. The NASA Kepler mission has observed more than 190 000 stars in the constellations of Cygnus and Lyra. Around 4 years of almost continuous ultra high-precision photometry have been obtained reaching a duty cycle higher than 90% for many of these stars. However, almost regular gaps due to nominal operations are present in the light curves on different time scales. Aims: In this paper we want to highlight the impact of those regular gaps in asteroseismic analyses, and we try to find a method that minimizes their effect on the frequency domain. Methods: To do so, we isolate the two main time scales of quasi regular gaps in the data. We then interpolate the gaps and compare the power density spectra of four different stars: two red giants at different stages of their evolution, a young F-type star, and a classical pulsator in the instability strip. Results: The spectra obtained after filling the gaps in the selected solar-like stars show a net reduction in the overall background level, as well as a change in the background parameters. The inferred convective properties could change as much as ~200% in the selected example, introducing a bias in the p-mode frequency of maximum power. When asteroseismic scaling relations are used, this bias can lead to a variation in the surface gravity of 0.05 dex. Finally, the oscillation spectrum in the classical pulsator is cleaner than the original one.
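
    The effect is easy to reproduce in miniature. The sketch below (illustrative; the paper's inpainting is more sophisticated than linear interpolation) fills flagged gaps and compares power spectra before and after:

        import numpy as np

        def fill_gaps(time, flux, good):
            """Interpolate across gaps flagged by the boolean mask `good`."""
            filled = flux.copy()
            filled[~good] = np.interp(time[~good], time[good], flux[good])
            return filled

        def power_spectrum(flux, dt):
            """One-sided power spectrum of an evenly sampled light curve."""
            freq = np.fft.rfftfreq(len(flux), dt)
            power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
            return freq, power

        # Regular gaps (e.g., one cadence lost every k samples) imprint
        # spurious peaks on the spectrum; comparing the spectra of the
        # gapped and gap-filled series makes the bias visible.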

  17. Regularity and predictability of human mobility in personal space.

    PubMed

    Austin, Daniel; Cross, Robin M; Hayes, Tamara; Kaye, Jeffrey

    2014-01-01

    Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out of home activity during travel or social interactions with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity) is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.

  18. Manifold regularized multitask feature learning for multimodality disease classification.

    PubMed

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2015-02-01

    Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods are typically used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. We also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the AD neuroimaging initiative database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. PMID:25277605
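
    Schematically, the objective combines a group-sparse term that couples the modalities with a per-modality manifold term (notation illustrative, following the description above):

        \min_{W = [w_1, \dots, w_M]} \; \sum_{m=1}^{M} \| y_m - X_m w_m \|_2^{2}
        \; + \; \lambda \, \| W \|_{2,1}
        \; + \; \gamma \sum_{m=1}^{M} (X_m w_m)^{\top} L_m \, (X_m w_m),

    where ||W||_{2,1} sums the l2 norms of the rows of W (joint feature selection across modalities) and L_m is the graph Laplacian built from the data of modality m (the manifold regularizer).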

  19. Identification and sorting of regular textures according to their similarity

    NASA Astrophysics Data System (ADS)

    Hernández Mesa, Pilar; Anastasiadis, Johannes; Puente León, Fernando

    2015-05-01

    Regardless of whether mosaics, material surfaces or skin surfaces are inspected, their texture plays an important role. Texture is a property which is hard to describe using words, but it can easily be described in pictures. Furthermore, a huge amount of digital images containing a visual description of textures already exists. However, this information becomes useless if there are no appropriate methods to browse the data. In addition, depending on the given task, some properties like scale, rotation or intensity invariance are desired. In this paper we propose to analyze texture images according to their characteristic pattern. First, a classification approach is proposed to separate regular from non-regular textures. The second stage focuses on regular textures, suggesting a method to sort them according to their similarity. Different features are extracted from the texture in order to describe its scale, orientation, texel and the texel's relative position. Depending on the desired invariance of the visual characteristics (like the texture's scale or the texel's form invariance), the comparison of the features between images is weighted and combined to define the degree of similarity between them. Tuning the weighting parameters allows this search algorithm to be easily adapted to the requirements of the desired task. Not only can the total invariance of desired parameters be adjusted; the weighting of the parameters may also be modified to adapt to an application-specific type of similarity. This search method has been evaluated using different textures and similarity criteria, achieving very promising results.

  20. Isotropic model for cluster growth on a regular lattice

    NASA Astrophysics Data System (ADS)

    Yates, Christian A.; Baker, Ruth E.

    2013-08-01

    There exists a plethora of mathematical models for cluster growth and/or aggregation on regular lattices. Almost all suffer from inherent anisotropy caused by the regular lattice upon which they are grown. We analyze the little-known model for stochastic cluster growth on a regular lattice first introduced by Ferreira Jr. and Alves [J. Stat. Mech. Theory Exp. (2006) P11007, doi:10.1088/1742-5468/2006/11/P11007], which produces circular clusters with no discernible anisotropy. We demonstrate that even in the noise-reduced limit the clusters remain circular. We adapt the model by introducing a specific rearrangement algorithm so that, rather than adding elements to the cluster from the outside (corresponding to apical growth), our model uses mitosis-like cell splitting events to increase the cluster size. We analyze the surface scaling properties of our model and compare it to the behavior of more traditional models. In “1+1” dimensions we discover and explore a new, nonmonotonic surface thickness scaling relationship which differs significantly from the Family-Vicsek scaling relationship. This suggests that, for models whose clusters do not grow through particle additions which are solely dependent on surface considerations, the traditional classification into “universality classes” may not be appropriate.
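
    For reference, the Family-Vicsek relationship against which the new scaling is contrasted reads, in its standard form,

        W(L, t) \sim L^{\alpha} f\!\left( t / L^{z} \right), \qquad
        f(x) \sim \begin{cases} x^{\beta}, & x \ll 1, \\ \text{const}, & x \gg 1, \end{cases} \qquad
        \beta = \alpha / z,

    where W is the surface width (roughness), L the system size, and α, β, z the roughness, growth, and dynamic exponents.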

  1. Invariant regularization of anomaly-free chiral theories

    NASA Astrophysics Data System (ADS)

    Chang, Lay Nam; Soo, Chopin

    1997-02-01

    We present a generalization of the Frolov-Slavnov invariant regularization scheme for chiral fermion theories in curved spacetimes. The Lagrangian level regularization is explicitly invariant under all the local gauge symmetries of the theory, including local Lorentz invariance. The perturbative scheme works for arbitrary representations which satisfy the chiral gauge anomaly and the mixed Lorentz-gauge anomaly cancellation conditions. Anomalous theories on the other hand manifest themselves by having divergent fermion loops which remain unregularized by the scheme. Since the invariant scheme is promoted to include also local Lorentz invariance, spectator fields which do not couple to gravity cannot be, and are not, introduced. Furthermore, the scheme is truly chiral (Weyl) in that all fields, including the regulators, are left handed; and only the left-handed spin connection is needed. The scheme is, therefore, well suited for the study of the interaction of matter with all four known forces in a completely chiral fashion. In contrast with the vectorlike formulation, the degeneracy between the Adler-Bell-Jackiw current and the fermion number current in the bare action is preserved by the chiral regularization scheme.

  2. 1200 years of regular outbreaks in alpine insects

    PubMed Central

    Esper, Jan; Büntgen, Ulf; Frank, David C; Nievergelt, Daniel; Liebhold, Andrew

    2006-01-01

    The long-term history of Zeiraphera diniana Gn. (the larch budmoth, LBM) outbreaks was reconstructed from tree rings of host subalpine larch in the European Alps. This record was derived from 47 513 maximum latewood density measurements, and highlights the impact of contemporary climate change on ecological disturbance regimes. With over 1000 generations represented, this is the longest annually resolved record of herbivore population dynamics, and our analysis demonstrates that remarkably regular LBM fluctuations persisted over the past 1173 years with population peaks averaging every 9.3 years. These regular abundance oscillations recurred until 1981, with the absence of peak events during recent decades. Comparison with an annually resolved, millennium-long temperature reconstruction representative for the European Alps (r=0.72, correlation with instrumental data) demonstrates that regular insect population cycles continued despite major climatic changes related to warming during medieval times and cooling during the Little Ice Age. The late twentieth century absence of LBM outbreaks, however, corresponds to a period of regional warmth that is exceptional with respect to the last 1000+ years, suggesting vulnerability of an otherwise stable ecological system in a warming environment. PMID:17254991

  3. Path integral regularization of pure Yang-Mills theory

    SciTech Connect

    Jacquot, J. L.

    2009-07-15

    In enlarging the field content of pure Yang-Mills theory to a cutoff dependent matrix valued complex scalar field, we construct a vectorial operator, which is by definition invariant with respect to the gauge transformation of the Yang-Mills field and with respect to a Stueckelberg type gauge transformation of the scalar field. This invariant operator converges to the original Yang-Mills field as the cutoff goes to infinity. With the help of cutoff functions, we construct with this invariant a regularized action for the pure Yang-Mills theory. In order to be able to define both the gauge and scalar fields kinetic terms, other invariant terms are added to the action. Since the scalar fields flat measure is invariant under the Stueckelberg type gauge transformation, we obtain a regularized gauge-invariant path integral for pure Yang-Mills theory that is mathematically well defined. Moreover, the regularized Ward-Takahashi identities describing the dynamics of the gauge fields are exactly the same as the formal Ward-Takahashi identities of the unregularized theory.

  4. Matter conditions for regular black holes in f(T) gravity

    NASA Astrophysics Data System (ADS)

    Aftergood, Joshua; DeBenedictis, Andrew

    2014-12-01

    We study the conditions imposed on matter to produce a regular (nonsingular) interior of a class of spherically symmetric black holes in the f(T) extension of teleparallel gravity. The class of black holes studied (T spheres) is necessarily singular in general relativity. We derive a tetrad which is compatible with the black hole interior and utilize this tetrad in the gravitational equations of motion to study the black hole interior. It is shown that in the case where the gravitational Lagrangian is expandable in a power series f(T) = T + ∑_{n≠1} b_n T^n, black holes can be nonsingular while respecting certain energy conditions in the matter fields. Thus, the black hole singularity may be removed, and the gravitational equations of motion can remain valid throughout the manifold. This is true as long as n is positive but is not true in the negative sector of the theory. Hence, gravitational f(T) Lagrangians which are Taylor expandable in powers of T may yield regular black holes of this type. Although it is found that these black holes can be rendered nonsingular in f(T) theory, we conjecture that a mild singularity theorem holds in that the dominant energy condition is violated in an arbitrarily small neighborhood of the general relativity singular point if the corresponding f(T) black hole is regular. The analytic techniques here can also be applied to gravitational Lagrangians which are not Laurent or Taylor expandable.

  5. Regularization of multi-soliton form factors in sine-Gordon model

    NASA Astrophysics Data System (ADS)

    Pálmai, T.

    2012-08-01

    A general and systematic regularization is developed for the exact solitonic form factors of exponential operators in the (1+1)-dimensional sine-Gordon model by analytical continuation of their integral representations. The procedure is implemented in Mathematica. Test results are shown for four- and six-soliton form factors.
    Catalogue identifier: AEMG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1462
    No. of bytes in distributed program, including test data, etc.: 15 488
    Distribution format: tar.gz
    Programming language: Mathematica [1]
    Computer: PC
    Operating system: Cross-platform
    Classification: 7.7, 11.1, 23
    Nature of problem: The multi-soliton form factors of the sine-Gordon model (relevant in two-dimensional physics) were given only by a highly non-trivial integral representation with a limited domain of convergence. Practical applications of the form factors, e.g. calculation of correlation functions in two-dimensional condensed matter systems, were not possible in general.
    Solution method: Using analytic continuation techniques an efficient algorithm is found and implemented in Mathematica, which provides a general and systematic way to calculate multi-soliton form factors in the sine-Gordon model. The package contains routines to compute the two-, four- and six-soliton form factors.
    Running time: Strongly dependent on the desired accuracy and the number of solitons. For physical rapidities, after an initialization of about 30 s, the calculation of the two-, four- and six-soliton form factors at a single point takes approximately 0.5 s, 2.5 s and 8 s, respectively.
    [1] Wolfram Research, Inc., Mathematica Edition: Version 7.0, Wolfram Research, Inc., Champaign, Illinois, 2008.

  6. Regular biorthogonal pairs and pseudo-bosonic operators

    NASA Astrophysics Data System (ADS)

    Inoue, H.; Takakura, M.

    2016-08-01

    The first purpose of this paper is to show a method of constructing a regular biorthogonal pair based on the commutation rule ab - ba = I for a pair of operators a and b acting on a Hilbert space H with inner product (·|·). Here, sequences {ϕn} and {ψn} in a Hilbert space H are biorthogonal if (ϕn|ψm) = δnm, n, m = 0, 1, …, and they are regular if both Dϕ ≡ Span{ϕn} and Dψ ≡ Span{ψn} are dense in H. Indeed, the assumptions needed to construct the regular biorthogonal pair coincide with the definition of pseudo-bosons as originally given in F. Bagarello ["Pseudobosons, Riesz bases, and coherent states," J. Math. Phys. 51, 023531 (2010)]. Furthermore, we study the connections between the pseudo-bosonic operators a, b, a†, b† and the pseudo-bosonic operators defined by a regular biorthogonal pair ({ϕn}, {ψn}) and an ONB e of H, as in H. Inoue ["General theory of regular biorthogonal pairs and its physical applications," e-print arXiv:math-ph/1604.01967]. The second purpose is to define and study the notion of D-pseudo-bosons of F. Bagarello ["More mathematics for pseudo-bosons," J. Math. Phys. 54, 063512 (2013)] and F. Bagarello ["From self-adjoint to non self-adjoint harmonic oscillators: Physical consequences and mathematical pitfalls," Phys. Rev. A 88, 032120 (2013)] and to give a method of constructing D-pseudo-bosons in several steps. It is then shown that for any ONB e = {en} in H and any operators T and T^{-1} in L†(D), we may construct operators A and B satisfying the definition of D-pseudo-bosons, where D is a dense subspace of a Hilbert space H and L†(D) is the set of all linear operators T from D to D such that T*D ⊂ D, where T* is the adjoint of T. Finally, we give some physical examples of D-pseudo-bosons based on standard bosons, obtained by the method of constructing D-pseudo-bosons stated above.

  7. A Remark on the Two-Dimensional Magneto-Hydrodynamics-Alpha System

    NASA Astrophysics Data System (ADS)

    Yamazaki, Kazuo

    2016-09-01

    We study the generalized magneto-hydrodynamics-α system in two-dimensional space with fractional Laplacians in the dissipative and diffusive terms. We show that the solution pair of velocity and magnetic fields preserves its initial regularity in all cases when the powers of the two fractional Laplacians add up to one. This settles the global regularity issue in the general case, which was remarked to be an open problem by the authors in Zhao and Zhu (Appl Math Lett 29:26-29, 2014).
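
    Schematically (exponent names assumed here), the dissipative and diffusive terms are ν(-Δ)^{α₁}u and η(-Δ)^{α₂}b, and the result covers all cases with

        \alpha_1 + \alpha_2 = 1 .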

  8. A quantitative comparison of the effects of stabilizing functionals in 3D regularized inversion of marine CSEM data

    NASA Astrophysics Data System (ADS)

    Wilson, G. A.; Cuma, M.; Zhdanov, M. S.; Gribenko, A.; Black, N.

    2010-12-01

    Three-dimensional (3D) inversion is required for defining 3D geoelectric structures associated with hydrocarbon (HC) deposits from marine controlled-source electromagnetic (CSEM) data. In 3D inversion, regularization is introduced to ensure uniqueness and stability in the inverse model. However, a common misconception is that regularization implies smoothing of the inverse model when in fact regularization and the stabilizing functionals are used to select the class of model from which an inverse solution is sought. Smooth stabilizers represent just one inverse model class from which the minimum norm or first or second derivatives of the 3D resistivity distribution are minimized. Smooth stabilizers have limited physical basis in geological interpretation aimed at exploration for HC reservoirs. Focusing stabilizers on the other hand make it possible to recover subsurface models with sharp resistivity contrasts which are typical for HC reservoirs. Using a synthetic example of the stacked anticlinal structures and reservoir units of the Shtokman gas field in the Barents Sea, we demonstrate that focusing stabilizers not only recover more geologically meaningful models than smooth stabilizers, but they provide better convergence for iterative inversion. This makes it practical to run multiple inversion scenarios based on the suite of a priori models, different data combinations, and various other parameters so as to build confidence in the recovered 3D resistivity model and to discriminate any artifacts that may arise from the interpretation of a single 3D inversion result.
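
    A representative pair of stabilizing functionals makes the contrast concrete (taking the minimum-support functional of Portniaguine and Zhdanov as the focusing example; the paper's exact choices may differ):

        s_{\mathrm{MS}}(m) = \int_{V} \frac{(m - m_{\mathrm{apr}})^{2}}{(m - m_{\mathrm{apr}})^{2} + e^{2}}\, dv,
        \qquad
        s_{\mathrm{smooth}}(m) = \int_{V} |\nabla m|^{2}\, dv,

    minimized within the usual parametric functional P^α(m) = φ(m) + α s(m). The focusing term saturates for large anomalies, so sharp resistivity contrasts are not penalized the way the smooth term penalizes them.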

  9. A PDE-Based Regularization Algorithm Toward Reducing Speckle Tracking Noise: A Feasibility Study for Ultrasound Breast Elastography.

    PubMed

    Guo, Li; Xu, Yan; Xu, Zhengfu; Jiang, Jingfeng

    2015-10-01

    Obtaining accurate ultrasonically estimated displacements along both axial (parallel to the acoustic beam) and lateral (perpendicular to the beam) directions is an important task for various clinical elastography applications (e.g., modulus reconstruction and temperature imaging). In this study, a partial differential equation (PDE)-based regularization algorithm was proposed to enhance motion tracking accuracy. More specifically, the proposed PDE-based algorithm, utilizing two-dimensional (2D) displacement estimates from a conventional elastography system, attempted to iteratively reduce noise contained in the original displacement estimates by mathematical regularization. In this study, tissue incompressibility was the physical constraint used by the above-mentioned mathematical regularization. This proposed algorithm was tested using computer-simulated data, a tissue-mimicking phantom, and in vivo breast lesion data. Computer simulation results demonstrated that the method significantly improved the accuracy of lateral tracking (e.g., a factor of 17 at 0.5% compression). From in vivo breast lesion data investigated, we have found that, as compared with the conventional method, higher quality axial and lateral strain images (e.g., at least 78% improvements among the estimated contrast-to-noise ratios of lateral strain images) were obtained. Our initial results demonstrated that this conceptually and computationally simple method could be useful for improving the image quality of ultrasound elastography with current clinical equipment as a post-processing tool.
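
    A toy version of the regularization step conveys the idea (illustrative discretization with crude boundary handling; the paper solves the associated PDE properly): given noisy axial/lateral estimates, minimize a fidelity-plus-incompressibility energy by gradient descent.

        import numpy as np

        def divergence(u, v):
            """Forward-difference divergence of a 2D displacement field."""
            du = np.diff(u, axis=1, append=u[:, -1:])
            dv = np.diff(v, axis=0, append=v[-1:, :])
            return du + dv

        def regularize(u0, v0, beta=0.5, step=0.1, n_iter=200):
            """min ||u-u0||^2 + ||v-v0||^2 + beta*||div(u,v)||^2 (toy)."""
            u, v = u0.copy(), v0.copy()
            for _ in range(n_iter):
                d = divergence(u, v)
                # Adjoint of the forward difference: negative backward diff.
                gu = -np.diff(d, axis=1, prepend=d[:, :1])
                gv = -np.diff(d, axis=0, prepend=d[:1, :])
                u -= step * ((u - u0) + beta * gu)
                v -= step * ((v - v0) + beta * gv)
            return u, v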

  10. Disentangling regular and chaotic motion in the standard map using complex network analysis of recurrences in phase space.

    PubMed

    Zou, Yong; Donner, Reik V; Thiel, Marco; Kurths, Jürgen

    2016-02-01

    Recurrence in the phase space of complex systems is a well-studied phenomenon, which has provided deep insights into the nonlinear dynamics of such systems. For dissipative systems, characteristics based on recurrence plots have recently attracted much interest for discriminating qualitatively different types of dynamics in terms of measures of complexity, dynamical invariants, or even structural characteristics of the underlying attractor's geometry in phase space. Here, we demonstrate that the latter approach also provides a corresponding distinction between different co-existing dynamical regimes of the standard map, a paradigmatic example of a low-dimensional conservative system. Specifically, we show that the recently developed approach of recurrence network analysis provides potentially useful geometric characteristics distinguishing between regular and chaotic orbits. We find that chaotic orbits in an intermittent laminar phase (commonly referred to as sticky orbits) have a distinct geometric structure possibly differing in a subtle way from those of regular orbits, which is highlighted by different recurrence network properties obtained from relatively short time series. Thus, this approach can help discriminating regular orbits from laminar phases of chaotic ones, which presents a persistent challenge to many existing chaos detection techniques. PMID:26931601
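
    The recurrence network construction itself is compact. A sketch follows (illustrative parameters; the toroidal geometry of the standard map is ignored in the distance computation for brevity):

        import numpy as np
        import networkx as nx

        def recurrence_network(orbit, eps):
            """Nodes are orbit points; edges join points closer than eps."""
            diff = orbit[:, None, :] - orbit[None, :, :]
            A = (np.linalg.norm(diff, axis=-1) < eps).astype(int)
            np.fill_diagonal(A, 0)                  # no self-loops
            return nx.from_numpy_array(A)

        def standard_map(theta, p, K=1.4, n=500):
            out = np.empty((n, 2))
            for i in range(n):
                p = (p + K * np.sin(theta)) % (2 * np.pi)
                theta = (theta + p) % (2 * np.pi)
                out[i] = theta, p
            return out

        # Geometric network measures such as transitivity differ between
        # regular and chaotic orbits, which is the discriminating idea.
        G = recurrence_network(standard_map(0.5, 0.1), eps=0.2)
        print(nx.transitivity(G))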

  12. Dimensionality reduction in Bayesian estimation algorithms

    NASA Astrophysics Data System (ADS)

    Petty, G. W.

    2013-03-01

    An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals - whether Bayesian or not - lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.
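
    One plausible reading of the two-stage construction is sketched below (illustrative, not the author's exact procedure): stage one whitens the background (geophysical plus instrument) covariance; stage two keeps the leading principal components of the dependent data in the whitened space.

        import numpy as np

        def pseudochannels(bg, dep, n_keep=1):
            """Project N channels onto n_keep pseudochannels.

            bg  : (n_bg, N)  background-only observations
            dep : (n_dep, N) dependent (training) observations
            """
            # Stage 1: whiten the background covariance.
            w, V = np.linalg.eigh(np.cov(bg, rowvar=False))
            W = V / np.sqrt(w + 1e-12)          # whitening transform
            Z = (dep - bg.mean(axis=0)) @ W
            # Stage 2: leading principal components of the whitened data.
            _, _, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
            return W @ Vt[:n_keep].T            # (N, n_keep) projection

    In the whitened coordinates the background noise is, by construction, approximately diagonal with unit magnitude, which is precisely the covariance structure that a conventional retrieval wrongly assumes for the raw channels.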

  13. Dimensionality reduction in Bayesian estimation algorithms

    NASA Astrophysics Data System (ADS)

    Petty, G. W.

    2013-09-01

    An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals - whether Bayesian or not - lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.

  14. Microphysical aerosol parameters of spheroidal particles via regularized inversion of lidar data

    NASA Astrophysics Data System (ADS)

    Samaras, Stefanos; Böckmann, Christine

    2015-04-01

    One of the main topics in understanding the aerosol impact on climate is the investigation of the spatial and temporal variability of microphysical properties of particles, e.g., the complex refractive index, the effective radius, the volume and surface-area concentration, and the single-scattering albedo. Remote sensing is a technique used to monitor aerosols in global coverage and fill in the observational gap. This research topic involves using multi-wavelength Raman lidar systems to extract the microphysical properties of aerosol particles, along with depolarization signals to account for the non-sphericity of the latter. Given the optical parameters (measured by a lidar) and the kernel functions, which summarize the size, shape and composition of the particles, we solve for the size distribution of the particles modeled by a Fredholm integral system and further calculate the refractive index. This model works well for spherical particles (e.g. smoke): the kernel functions are derived from relatively simple formulas (Mie scattering theory), and research has led to successful retrievals for particles which at least resemble a spherical geometry (small depolarization ratio). More complicated atmospheric structures (e.g. dust), however, require the employment of non-spherical kernels and/or more complicated models, which are investigated in this paper. The new model is a two-dimensional one that includes the aspect ratio of spheroidal particles. The spheroidal kernel functions can be calculated via the T-matrix method, a technique for computing electromagnetic scattering by single, homogeneous, arbitrarily shaped particles. In order to speed up the process and run large numbers of simulation tests, we created a software interface using different regularization methods and parameter choice rules. The following methods have been used: truncated singular value decomposition and Padé iteration with the discrepancy principle, and Tikhonov regularization with the L-curve.
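
    Schematically, the two-dimensional model described above is a Fredholm system over particle radius r and aspect ratio ε (notation illustrative):

        g(\lambda) = \int_{r_{\min}}^{r_{\max}} \int_{\epsilon_{\min}}^{\epsilon_{\max}}
        K(\lambda, r, \epsilon; m)\, v(r, \epsilon)\, d\epsilon\, dr \; + \; e(\lambda),

    where g collects the optical coefficients measured at the lidar wavelengths λ, K are the spheroidal kernels (precomputed via T-matrix), v is the volume distribution to be retrieved, m the complex refractive index, and e the noise.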

  15. Regular flow reversals in Rayleigh-Bénard convection in a horizontal magnetic field.

    PubMed

    Tasaka, Yuji; Igaki, Kazuto; Yanagisawa, Takatoshi; Vogt, Tobias; Zuerner, Till; Eckert, Sven

    2016-04-01

    Magnetohydrodynamic Rayleigh-Bénard convection was studied experimentally using a liquid metal inside a box with a square horizontal cross section and aspect ratio of five. Systematic flow measurements were performed by means of ultrasonic velocity profiling that can capture time variations of instantaneous velocity profiles. Applying a horizontal magnetic field organizes the convective motion into a flow pattern of quasi-two-dimensional rolls arranged parallel to the magnetic field. The number of rolls has the tendency to decrease with increasing Rayleigh number Ra and to increase with increasing Chandrasekhar number Q. We explored convection regimes in a parameter range of Ra above 2×10^3; the regime of regular flow reversals, in which five rolls periodically change the direction of their circulation with gradual skew of the roll axes, can be considered the most remarkable one. The regime appears around a range of Ra/Q = 10, where irregular flow reversals were observed in Yanagisawa et al. We performed the proper orthogonal decomposition (POD) analysis on the spatiotemporal velocity distribution and detected that the regular flow reversals can be interpreted as a periodic emergence of a four-roll state in a dominant five-roll state. The POD analysis also provides the definition of the effective number of rolls as a more objective approach. PMID:27176392
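
    POD itself reduces to a singular value decomposition of the mean-subtracted snapshot matrix; a periodic alternation between a five-roll and a four-roll state appears as oscillating temporal coefficients of the leading modes. A minimal sketch (illustrative, not the authors' processing chain):

        import numpy as np

        def pod_modes(U, n_modes=4):
            """POD of a space-time matrix U (n_time, n_space).

            Returns spatial modes, temporal coefficients, and the
            fraction of fluctuation energy captured by each mode.
            """
            X = U - U.mean(axis=0)             # remove the temporal mean
            coeffs, s, modes = np.linalg.svd(X, full_matrices=False)
            energy = s**2 / np.sum(s**2)
            return modes[:n_modes], coeffs[:, :n_modes] * s[:n_modes], energy[:n_modes]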

  16. Regular flow reversals in Rayleigh-Bénard convection in a horizontal magnetic field

    NASA Astrophysics Data System (ADS)

    Tasaka, Yuji; Igaki, Kazuto; Yanagisawa, Takatoshi; Vogt, Tobias; Zuerner, Till; Eckert, Sven

    2016-04-01

    Magnetohydrodynamic Rayleigh-Bénard convection was studied experimentally using a liquid metal inside a box with a square horizontal cross section and an aspect ratio of five. Systematic flow measurements were performed by means of ultrasonic velocity profiling, which can capture time variations of instantaneous velocity profiles. Applying a horizontal magnetic field organizes the convective motion into a flow pattern of quasi-two-dimensional rolls arranged parallel to the magnetic field. The number of rolls tends to decrease with increasing Rayleigh number Ra and to increase with increasing Chandrasekhar number Q. We explored convection regimes in a parameter range starting at Ra = 2×10^3; among them, the regime of regular flow reversals, in which five rolls periodically change the direction of their circulation with a gradual skew of the roll axes, can be considered the most remarkable. The regime appears around Ra/Q = 10, where irregular flow reversals were observed by Yanagisawa et al. We performed a proper orthogonal decomposition (POD) analysis on the spatiotemporal velocity distribution and found that the regular flow reversals can be interpreted as a periodic emergence of a four-roll state within a dominant five-roll state. The POD analysis also provides a definition of the effective number of rolls as a more objective measure.

  17. Auditory feedback in error-based learning of motor regularity.

    PubMed

    van Vugt, Floris T; Tillmann, Barbara

    2015-05-01

    Music and speech are skills that require high temporal precision of motor output. A key question is how humans achieve this timing precision given the poor temporal resolution of somatosensory feedback, which is classically considered to drive motor learning. We hypothesise that auditory feedback critically contributes to learning timing and that, similarly to visuo-spatial learning models, learning proceeds by correcting a proportion of perceived timing errors. Thirty-six participants learned to tap a sequence regularly in time. For participants in the synchronous-sound group, a tone was presented simultaneously with every keystroke. For the jittered-sound group, the tone was presented after a random delay of 10-190 ms following the keystroke, thus degrading the temporal information that the sound provided about the movement. For the mute group, no keystroke-triggered sound was presented. In line with the model predictions, participants in the synchronous-sound group were able to improve tapping regularity, whereas the jittered-sound and mute groups were not. The improved tapping regularity of the synchronous-sound group also transferred to a novel sequence and was maintained when sound was subsequently removed. The present findings provide evidence that humans engage in auditory feedback error-based learning to improve movement quality (here, to reduce variability in sequence tapping). We thus elucidate the mechanism by which high temporal precision of movement can be achieved through sound in a way that may not be possible with less temporally precise somatosensory modalities. Furthermore, the finding that sound-supported learning generalises to novel sequences suggests potential rehabilitation applications.
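
    The hypothesised learning rule, correcting a fixed proportion of the perceived timing error on each tap, can be sketched as a toy simulation. The gain, noise levels and Gaussian jitter below are illustrative assumptions (the experiment used a uniform 10-190 ms delay), not fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 0.5          # target inter-tap interval (s), assumed
alpha = 0.3           # assumed correction gain (fraction of perceived error)
motor_sd = 0.025      # assumed motor noise (s)
n_taps = 200

def simulate(feedback_jitter_sd):
    """Tap intervals under error-based learning; jittering the auditory
    feedback corrupts the perceived error and hence the correction."""
    interval = 0.6                      # start away from the target
    intervals = []
    for _ in range(n_taps):
        produced = interval + rng.normal(0, motor_sd)
        intervals.append(produced)
        # the keystroke-triggered tone carries the timing error, possibly jittered
        perceived_error = (produced - target) + rng.normal(0, feedback_jitter_sd)
        interval -= alpha * perceived_error
    return np.std(intervals[-50:])      # late-phase tapping variability

print("synchronous sound:", simulate(0.0))
print("jittered sound   :", simulate(0.06))
```

    In this sketch the jittered condition ends with higher late-phase variability, mirroring the finding that only the synchronous-sound group improved regularity.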

  18. Quantification of fetal heart rate regularity using symbolic dynamics

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.

    2007-03-01

    Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimester of pregnancy. In each fetal RR interval time series, short term beat-to-beat heart rate changes were coded in 8-bit binary sequences. Redundancies of the 2^8 different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary value. There was a specific, nonuniform distribution of the sequences in the pattern sets and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations which may be associated with directed fetal activity or movement or baroreflex activity. Application of the pattern classes revealed that those sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns. Using symbolic dynamics to
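
    The binary coding and the grouping into pattern classes can be sketched as follows. The thresholding rule and the stand-in RR series are assumptions for illustration, but the rotation/inversion grouping does reproduce the 20 classes quoted above.

```python
import numpy as np
from collections import Counter
from itertools import product

def encode(rr, width=8):
    """Code beat-to-beat changes as bits (1 = RR interval increases) and
    cut the bit stream into overlapping 8-bit words."""
    bits = (np.diff(rr) > 0).astype(int)
    return ["".join(map(str, bits[i:i + width]))
            for i in range(len(bits) - width + 1)]

def canonical(word):
    """Representative of a word's class under cyclic rotation and bit
    inversion, so equivalent patterns are counted together."""
    inverse = "".join("10"[int(c)] for c in word)
    variants = [w[i:] + w[:i] for w in (word, inverse) for i in range(len(w))]
    return min(variants)

# All 2^8 = 256 words collapse into the 20 pattern classes.
all_words = ["".join(p) for p in product("01", repeat=8)]
print(len({canonical(w) for w in all_words}))  # -> 20

# Stand-in RR series (ms); real data would come from fetal recordings.
rr = 800 + np.cumsum(np.random.default_rng(2).normal(0, 5, 300))
class_counts = Counter(canonical(w) for w in encode(rr))
```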

  19. Applying molecular immunohaematology to regularly transfused thalassaemic patients in Thailand

    PubMed Central

    Rujirojindakul, Pairaya; Flegel, Willy A.

    2014-01-01

    Background: Red blood cell transfusion is the principal therapy in patients with severe thalassaemias and haemoglobinopathies, which are prevalent in Thailand. Serological red blood cell typing is confounded by chronic transfusion, because of circulating donor red blood cells. We evaluated the concordance of serological phenotypes between a routine and a reference laboratory and with red cell genotyping. Materials and methods: Ten consecutive Thai patients with β-thalassemia major who received regular transfusions were enrolled in Thailand. Phenotypes were tested serologically at Songklanagarind Hospital and at the National Institutes of Health. Red blood cell genotyping was performed with commercially available kits and a platform. Results: In only three patients was the red cell genotyping concordant with the serological phenotypes for five antithetical antigen pairs in four blood group systems at the two institutions. At the National Institutes of Health, 32 of the 100 serological tests yielded invalid or discrepant results. The positive predictive value of serology did not reach 1 for any blood group system at either of the two institutions in this set of ten patients. Discussion: Within this small study, numerous discrepancies were observed between serological phenotypes at the two institutes; red cell genotyping enabled determination of the blood group when serology failed due to transfused red blood cells. We question the utility of serological tests in regularly transfused paediatric patients and propose relying solely on red cell genotyping, which requires training for laboratory personnel and physicians. Red cell genotyping outperformed red cell serology by an order of magnitude in regularly transfused patients. PMID:24120606
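
    The serology-versus-genotype comparison can be made concrete with a toy calculation; the antigen results below are invented stand-ins, not the study's data.

```python
# Hypothetical typing results for one patient across five antigens.
serology = {"K": "+", "Jka": "+", "Jkb": "-", "Fya": "+", "Fyb": "-"}
genotype = {"K": "+", "Jka": "+", "Jkb": "+", "Fya": "+", "Fyb": "-"}

# Treat the genotype-predicted phenotype as the reference and compute the
# positive predictive value of a positive serological result.
tp = sum(1 for ag, res in serology.items() if res == "+" and genotype[ag] == "+")
fp = sum(1 for ag, res in serology.items() if res == "+" and genotype[ag] == "-")
ppv = tp / (tp + fp)

concordant = sum(serology[ag] == genotype[ag] for ag in serology)
print(f"concordant antigens: {concordant}/{len(serology)}")
print(f"PPV of serology vs genotype: {ppv:.2f}")
```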