Dimensional Regularization is Generic
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo
The absence of the quadratic divergence in the Higgs sector of the Standard Model in the dimensional regularization is usually regarded as an exceptional property of a specific regularization. To understand what is going on in the dimensional regularization, we illustrate how to reproduce its results for the λϕ4 theory in a more conventional scheme such as higher derivative regularization; the basic postulate involved is that the quadratically divergent induced mass, which is independent of the scale change of the physical mass, is kinematical and unphysical. This is consistent with the derivation of the Callan-Symanzik equation, which is a comparison of two theories with slightly different masses, for the λϕ4 theory without encountering the quadratic divergence. In this sense the dimensional regularization may be said to be generic in a bottom-up approach starting with a successful low energy theory. We also define a modified version of the mass independent renormalization for a scalar field which leads to the homogeneous renormalization group equation. Implications of the present analysis for the Standard Model at high energies and for the presence or absence of SUSY at LHC energies are briefly discussed.
Bronnikov, K A; Fabris, J C
2006-06-30
We study self-gravitating, static, spherically symmetric phantom scalar fields with arbitrary potentials (favored by cosmological observations) and single out 16 classes of possible regular configurations with flat, de Sitter, and anti-de Sitter asymptotics. Among them are traversable wormholes, bouncing Kantowski-Sachs (KS) cosmologies, and asymptotically flat black holes (BHs). A regular BH has a Schwarzschild-like causal structure, but the singularity is replaced by a de Sitter infinity, giving a hypothetic BH explorer a chance to survive. It also looks possible that our Universe has originated in a phantom-dominated collapse in another universe, with KS expansion and isotropization after crossing the horizon. Explicit examples of regular solutions are built and discussed. Possible generalizations include k-essence type scalar fields (with a potential) and scalar-tensor gravity.
Regularized Structural Equation Modeling.
Jacobucci, Ross; Grimm, Kevin J; McArdle, John J
A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of producing simpler, more interpretable models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
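The lasso and ridge penalties that RegSEM transplants into SEM act through their proximal operators: soft-thresholding (lasso) sets small parameters exactly to zero, while ridge shrinkage only scales them down. A minimal sketch of the two operators; the function names and the toy parameter vector are illustrative, not from the paper:

```python
import numpy as np

def prox_lasso(theta, lam):
    """Soft-thresholding: proximal operator of the lasso (L1) penalty."""
    return np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)

def prox_ridge(theta, lam):
    """Shrinkage: proximal operator of the ridge (squared L2) penalty."""
    return theta / (1.0 + lam)

# A toy vector of penalized parameters (e.g. cross-loadings).
theta = np.array([1.5, 0.3, -2.0, 0.05])
lasso_out = prox_lasso(theta, 0.5)   # small entries become exactly zero
ridge_out = prox_ridge(theta, 0.5)   # all entries shrink, none vanish
```

The qualitative difference visible here (exact zeros vs. uniform shrinkage) is what lets the lasso variant prune parameters from a structural model outright.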
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2017-01-27
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
Synchronization of Regular Automata
NASA Astrophysics Data System (ADS)
Caucal, Didier
Functional graph grammars are finite devices which generate the class of regular automata. We recall the notion of synchronization by grammars, and for any given grammar we consider the class of languages recognized by automata generated by all its synchronized grammars. The synchronization is an automaton-related notion: all grammars generating the same automaton synchronize the same languages. When the synchronizing automaton is unambiguous, the class of its synchronized languages forms an effective boolean algebra lying between the classes of regular languages and unambiguous context-free languages. We additionally provide sufficient conditions for such classes to be closed under concatenation and its iteration.
Geometry of spinor regularization
NASA Technical Reports Server (NTRS)
Hestenes, D.; Lounesto, P.
1983-01-01
The Kustaanheimo theory of spinor regularization is given a new formulation in terms of geometric algebra. The Kustaanheimo-Stiefel matrix and its subsidiary condition are put in a spinor form directly related to the geometry of the orbit in physical space. A physically significant alternative to the KS subsidiary condition is discussed. Derivations are carried out without using coordinates.
Illusory Liberalism in "Atlas de Geografía Humana"
ERIC Educational Resources Information Center
Ryan, Lorraine
2014-01-01
"Atlas de Geografía Humana" constitutes a critique of the much vaunted notion of a progressive Spain that has rectified the gender inequalities of the Francoist era, as one of the highly educated and successful protagonists, Fran, unwittingly adopts her mother's alignment with patriarchal norms. This novel elucidates the incompatibility…
Forghan, B.; Takook, M.V.; Zarei, A.
2012-09-15
In this paper, the electron self-energy, photon self-energy and vertex functions are explicitly calculated in Krein space quantization, including quantum metric fluctuations. The results are automatically regularized or finite. The magnetic anomaly and Lamb shift are also calculated in the one-loop approximation in this method. Finally, the obtained results are compared to conventional QED results. Highlights: Krein regularization yields finite values for the photon and electron self-energies and the vertex function. The magnetic anomaly is calculated and is exactly the same as the conventional result. The Lamb shift is calculated and is approximately the same as in Hilbert space.
Regularizing portfolio optimization
NASA Astrophysics Data System (ADS)
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
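The diversification pressure of the L2 regularizer can be seen in a stripped-down setting. The paper works with expected shortfall and its support-vector-regression connection; the sketch below substitutes the simpler variance objective (an assumption made here for brevity), where the L2-penalized, budget-constrained problem has a closed form:

```python
import numpy as np

def regularized_min_variance(cov, lam):
    """Minimize w'Cw + lam*||w||^2 subject to sum(w) = 1 (closed form)."""
    n = cov.shape[0]
    ones = np.ones(n)
    w = np.linalg.solve(cov + lam * np.eye(n), ones)
    return w / w.sum()

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w_small = regularized_min_variance(cov, 1e-6)  # near-unregularized weights
w_large = regularized_min_variance(cov, 1e3)   # heavy penalty: near-equal weights
```

As lam grows, the estimated covariance matters less and the weights approach the fully diversified 1/N portfolio, illustrating the trade-off between optimization and diversification described in the abstract.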
1973-10-01
The theory of strongly regular graphs was introduced by Bose r7 1 in 1963, in connection with partial geometries and 2 class association schemes. One...non adjacent vertices is constant and equal to ~. We shall denote by ~(p) (reap.r(p)) the set of vertices adjacent (resp.non adjacent) to a vertex p...is the complement of .2’ if the set of vertices of ~ is the set of vertices of .2’ and if two vertices in .2’ are adjacent if and only if they were
Regularized versus non-regularized statistical reconstruction techniques
NASA Astrophysics Data System (ADS)
Denisova, N. V.
2011-08-01
An important feature of positron emission tomography (PET) and single photon emission computer tomography (SPECT) is the stochastic property of real clinical data. Statistical algorithms such as ordered subset-expectation maximization (OSEM) and maximum a posteriori (MAP) are a direct consequence of the stochastic nature of the data. The principal difference between these two algorithms is that OSEM is a non-regularized approach, while the MAP is a regularized algorithm. From the theoretical point of view, reconstruction problems belong to the class of ill-posed problems and should be considered using regularization. Regularization introduces an additional unknown regularization parameter into the reconstruction procedure as compared with non-regularized algorithms. However, a comparison of non-regularized OSEM and regularized MAP algorithms with fixed regularization parameters has shown very minor difference between reconstructions. This problem is analyzed in the present paper. To improve the reconstruction quality, a method of local regularization is proposed based on the spatially adaptive regularization parameter. The MAP algorithm with local regularization was tested in reconstruction of the Hoffman brain phantom.
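The non-regularized branch of the comparison above is the classical ML-EM update (OSEM splits the data into ordered subsets; the single-subset version below is plain MLEM). A minimal sketch with a hypothetical toy system matrix, not a clinical geometry:

```python
import numpy as np

def mlem(A, b, n_iter=300):
    """Non-regularized ML-EM iterations for Poisson data b ~ A @ x, with x >= 0."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image (column sums of A)
    for _ in range(n_iter):
        ratio = b / (A @ x)            # measured / predicted projections
        x = x * (A.T @ ratio) / sens   # multiplicative update keeps x nonnegative
    return x

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true                         # noiseless projections for illustration
x_rec = mlem(A, b)
```

A MAP variant would multiply in (or subtract in the denominator) a prior-gradient term at each update; that extra term is exactly where the regularization parameter of the abstract enters.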
Flexible sparse regularization
NASA Astrophysics Data System (ADS)
Lorenz, Dirk A.; Resmerita, Elena
2017-01-01
The seminal paper of Daubechies, Defrise, and De Mol made clear that ℓp spaces with p ∈ [1,2) and p-powers of the corresponding norms are appropriate settings for dealing with reconstruction of sparse solutions of ill-posed problems by regularization. It seems that the case p = 1 provides the best results in most situations compared to the cases p ∈ (1,2). An extensive literature gives great credit also to using ℓp spaces with p ∈ (0,1) together with the corresponding quasi-norms, although one has to tackle challenging numerical problems raised by the non-convexity of the quasi-norms. In any of these settings, whether superlinear, linear or sublinear, the question of how to choose the exponent p has been not only a numerical issue, but also a philosophical one. In this work we introduce a more flexible way of sparse regularization by varying exponents. We introduce the corresponding functional analytic framework, which leaves the setting of normed spaces but works with so-called F-norms. One curious result is that there are F-norms which generate the ℓ1 space but are strictly convex, while the ℓ1-norm is merely convex.
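For the p = 1 case the workhorse algorithm descending from Daubechies, Defrise, and De Mol is iterative soft-thresholding (ISTA). A self-contained sketch (the trivial identity operator is chosen only so the fixed point is known in closed form):

```python
import numpy as np

def ista(A, b, lam, step, n_iter=200):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (p = 1 regularization)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - b)                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

A = np.eye(5)                          # illustrative forward operator
b = np.array([3.0, 0.1, -2.0, 0.05, 1.0])
x = ista(A, b, lam=0.5, step=1.0)      # here the minimizer is soft-threshold(b, 0.5)
```

Replacing the soft-threshold with the proximal map of a p < 1 quasi-norm, or of a varying-exponent F-norm as in this paper, changes only the thresholding line, which is what makes the exponent a design choice rather than a fixed ingredient.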
Mainstreaming the Regular Classroom Student.
ERIC Educational Resources Information Center
Kahn, Michael
The paper presents activities, suggested by regular classroom teachers, to help prepare the regular classroom student for mainstreaming. The author points out that regular classroom children need a vehicle in which curiosity, concern, interest, fear, attitudes and feelings can be fully explored, where prejudices can be dispelled, and where the…
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and from overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence of EMR to the deterministic matrix at a root-n rate. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
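The building block EMR combines is the graph-Laplacian smoothness penalty f'Lf; the ensemble step forms L as a convex combination of candidate Laplacians with learned weights. A toy sketch of the penalty itself (path graph and test functions are illustrative assumptions), showing why it rewards functions that vary slowly over the manifold:

```python
import numpy as np

def chain_laplacian(n):
    """Combinatorial Laplacian L = D - W of a path graph with n nodes."""
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0
    D = np.diag(W.sum(axis=1))
    return D - W

n = 10
L = chain_laplacian(n)
f_smooth = np.linspace(0.0, 1.0, n)               # varies slowly along the graph
f_rough = np.array([(-1.0) ** i for i in range(n)])
s_smooth = f_smooth @ L @ f_smooth                # equals sum over edges of (f_i - f_j)^2
s_rough = f_rough @ L @ f_rough
```

A composite regularizer in EMR's spirit would be f' (sum_j mu_j L_j) f with mu on the simplex, optimized jointly with the learner.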
On regular rotating black holes
NASA Astrophysics Data System (ADS)
Torres, R.; Fayos, F.
2017-01-01
Different proposals for regular rotating black hole spacetimes have appeared recently in the literature. However, a rigorous analysis and proof of the regularity of this kind of spacetimes is still lacking. In this note we analyze rotating Kerr-like black hole spacetimes and find the necessary and sufficient conditions for the regularity of all their second order scalar invariants polynomial in the Riemann tensor. We also show that the regularity is linked to a violation of the weak energy conditions around the core of the rotating black hole.
Linear regularity and [phi]-regularity of nonconvex sets
NASA Astrophysics Data System (ADS)
Ng, Kung Fu; Zang, Rui
2007-04-01
In this paper, we discuss some sufficient conditions for the linear regularity and bounded linear regularity (and their variations) of finitely many closed (not necessarily convex) sets in a normed vector space. The accompanying necessary conditions are also given in the setting of Asplund spaces.
Regularly timed events amid chaos
NASA Astrophysics Data System (ADS)
Blakely, Jonathan N.; Cooper, Roy M.; Corron, Ned J.
2015-11-01
We show rigorously that the solutions of a class of chaotic oscillators are characterized by regularly timed events in which the derivative of the solution is instantaneously zero. The perfect regularity of these events is in stark contrast with the well-known unpredictability of chaos. We explore some consequences of these regularly timed events through experiments using chaotic electronic circuits. First, we show that a feedback loop can be implemented to phase lock the regularly timed events to a periodic external signal. In this arrangement the external signal regulates the timing of the chaotic signal but does not strictly lock its phase. That is, phase slips of the chaotic oscillation persist without disturbing timing of the regular events. Second, we couple the regularly timed events of one chaotic oscillator to those of another. A state of synchronization is observed where the oscillators exhibit synchronized regular events while their chaotic amplitudes and phases evolve independently. Finally, we add additional coupling to synchronize the amplitudes, as well, however in the opposite direction illustrating the independence of the amplitudes from the regularly timed events.
Trajectory optimization using regularized variables
NASA Technical Reports Server (NTRS)
Lewallen, J. M.; Szebehely, V.; Tapley, B. D.
1969-01-01
Regularized equations for a particular optimal trajectory are compared with unregularized equations with respect to computational characteristics, using perturbation type numerical optimization. In the case of the three dimensional, low thrust, Earth-Jupiter rendezvous, the regularized equations yield a significant reduction in computer time.
Rotating regular black hole solution
NASA Astrophysics Data System (ADS)
Abdujabbarov, Ahmadjon
2016-07-01
Based on the Newman-Janis algorithm, the Ayón-Beato-García spacetime metric [Phys. Rev. Lett. 80, 5056 (1998)] of the regular spherically symmetric, static, and charged black hole has been converted into rotational form. It is shown that the derived rotating regular black hole solution is regular, and that the critical value of the electric charge for which the two horizons merge into one decreases appreciably in the presence of a nonvanishing rotation parameter a of the black hole.
NONCONVEX REGULARIZATION FOR SHAPE PRESERVATION
CHARTRAND, RICK
2007-01-16
The authors show that using a nonconvex penalty term to regularize image reconstruction can substantially improve the preservation of object shapes. The commonly used total-variation regularization, ∫|∇u|, penalizes the length of the object edges. They show that ∫|∇u|^p, 0 < p < 1, only penalizes edges of dimension at least 2 − p, and thus does not penalize finite-length edges at all. They give numerical examples showing the resulting improvement in shape preservation.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
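The mechanical core of a condition-number constraint is eigenvalue clipping. The sketch below is a simplification: it raises small eigenvalues to lambda_max / kappa_max, whereas the paper chooses the truncation level by maximum likelihood; the toy matrix is illustrative:

```python
import numpy as np

def clip_condition_number(S, kappa_max):
    """Shrink the spectrum of a covariance estimate so cond(S) <= kappa_max.
    Simplified sketch; the paper's estimator picks the clipping level by ML."""
    vals, vecs = np.linalg.eigh(S)
    floor = vals.max() / kappa_max
    vals = np.maximum(vals, floor)        # raise small eigenvalues to the floor
    return vecs @ np.diag(vals) @ vecs.T

S = np.diag([10.0, 1.0, 0.01])            # badly conditioned toy sample covariance
S_reg = clip_condition_number(S, kappa_max=50.0)
cond = np.linalg.cond(S_reg)
```

The clipped estimator stays invertible and well-conditioned by construction, which is exactly the property the abstract says generic shrinkage schemes do not guarantee directly.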
Geometric continuum regularization of quantum field theory
Halpern, M.B. (Dept. of Physics)
1989-11-08
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs.
Dimensional regularization in configuration space
Bollini, C.G.; Giambiagi, J.J.
1996-05-01
Dimensional regularization is introduced in configuration space by Fourier transforming in ν dimensions the perturbative momentum space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov and Shirkov, are needed for convolutions. The regularized causal functions in x space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant analytic functions of ν. Several examples are discussed. © 1996 The American Physical Society.
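For reference, the canonical momentum-space example of the ν-dependent poles mentioned above is the one-loop bubble (Euclidean signature, standard result):

```latex
\int \frac{d^{\nu}k}{(2\pi)^{\nu}}\,\frac{1}{(k^{2}+m^{2})^{2}}
  = \frac{\Gamma\!\left(2-\tfrac{\nu}{2}\right)}{(4\pi)^{\nu/2}}\,
    \left(m^{2}\right)^{\nu/2-2},
\qquad
\Gamma\!\left(2-\tfrac{\nu}{2}\right) \sim \frac{2}{4-\nu}
\quad (\nu \to 4).
```

The configuration-space construction of the abstract reproduces the same pole structure after the ν-dimensional Fourier (Bochner) transform.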
Regularized Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun
2009-01-01
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…
Giftedness in the Regular Classroom.
ERIC Educational Resources Information Center
Green, Anne
This paper presents a rationale for serving gifted students in the regular classroom and offers guidelines for recognizing students who are gifted in the seven types of intelligence proposed by Howard Gardner. Stressed is the importance of creating in the classroom a community of learners that allows all children to actively explore ideas and…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
From the Federal Register Online via the Government Publishing Office. FARM CREDIT SYSTEM INSURANCE CORPORATION: Farm Credit System Insurance Corporation Board Regular Meeting. SUMMARY: Notice is hereby given... A. Approval of Minutes, December 9, 2010. B. New Business: Review of Insurance Premium Rates. FCSIC...
Regularization of Localized Degradation Processes
1996-12-28
... in order to assess the regularization properties of non-classical micropolar Cosserat continua, which feature non-symmetric stress and strain tensors because of the presence of couple-stresses and micro-curvatures. It was shown that micropolar media may only exhibit localized failure in the form of tensile...
Resource Guide for Regular Teachers.
ERIC Educational Resources Information Center
Kampert, George J.
The resource guide for regular teachers provides policies and procedures of the Flour Bluff (Texas) school district regarding special education of handicapped students. Individual sections provide guidelines for the following areas: the referral process; individual assessment; participation on student evaluation and placement committee; special…
Temporal regularity in speech perception: Is regularity beneficial or deleterious?
Geiser, Eveline; Shattuck-Hufnagel, Stefanie
2012-04-01
Speech rhythm has been proposed to be of crucial importance for correct speech perception and language learning. This study investigated the influence of speech rhythm in second language processing. German pseudo-sentences were presented to participants in two conditions: 'naturally regular speech rhythm' and an 'emphasized regular rhythm'. Nine expert English speakers with 3.5±1.6 years of German training repeated each sentence after hearing it once over headphones. Responses were transcribed using the International Phonetic Alphabet and analyzed for the number of correct, false and missing consonants as well as for consonant additions. The overall number of correct reproductions of consonants did not differ between the two experimental conditions. However, speech rhythmicization significantly affected the serial position curve of correctly reproduced syllables. The results of this pilot study are consistent with the view that speech rhythm is important for speech perception.
On different facets of regularization theory.
Chen, Zhe; Haykin, Simon
2002-12-01
This review provides a comprehensive understanding of regularization theory from different perspectives, emphasizing smoothness and simplicity principles. Using the tools of operator theory and Fourier analysis, it is shown that the solution of the classical Tikhonov regularization problem can be derived from the regularized functional defined by a linear differential (integral) operator in the spatial (Fourier) domain. State-of-the-art research relevant to the regularization theory is reviewed, covering Occam's razor, minimum length description, Bayesian theory, pruning algorithms, informational (entropy) theory, statistical learning theory, and equivalent regularization. The universal principle of regularization in terms of Kolmogorov complexity is discussed. Finally, some prospective studies on regularization theory and beyond are suggested.
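The classical Tikhonov problem at the center of this review has the familiar closed-form solution x = (A'A + lam*I)^(-1) A'b. A minimal sketch with a hypothetical near-singular operator, showing the stabilizing effect of the penalty:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Classical Tikhonov solution: argmin_x ||Ax - b||^2 + lam*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])              # nearly singular forward operator
b = np.array([2.0, 2.0])
x_ls = tikhonov(A, b, 1e-12)               # barely regularized: large-norm solution
x_reg = tikhonov(A, b, 1e-2)               # penalized: smaller-norm, stabler solution
```

Increasing lam trades data fidelity for solution norm; the review's smoothness-based regularizers generalize the identity in the penalty to a differential operator.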
Physical model of dimensional regularization
NASA Astrophysics Data System (ADS)
Schonfeld, Jonathan F.
2016-12-01
We explicitly construct fractals of dimension 4 − ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity.
Regular Motions of Resonant Asteroids
NASA Astrophysics Data System (ADS)
Ferraz-Mello, S.
1990-11-01
This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS
Energy functions for regularization algorithms
NASA Technical Reports Server (NTRS)
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
Knowledge and regularity in planning
NASA Technical Reports Server (NTRS)
Allen, John A.; Langley, Pat; Matwin, Stan
1992-01-01
The field of planning has focused on several methods of using domain-specific knowledge. The three most common methods, use of search control, use of macro-operators, and analogy, are part of a continuum of techniques differing in the amount of reused plan information. This paper describes TALUS, a planner that exploits this continuum, and is used for comparing the relative utility of these methods. We present results showing how search control, macro-operators, and analogy are affected by domain regularity and the amount of stored knowledge.
22 CFR 120.39 - Regular employee.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...
22 CFR 120.39 - Regular employee.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...
22 CFR 120.39 - Regular employee.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An...
Regular Pentagons and the Fibonacci Sequence.
ERIC Educational Resources Information Center
French, Doug
1989-01-01
Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)
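The link between the pentagon constructions and the two sequences is the golden ratio: the diagonal-to-side ratio of a regular pentagon is φ = (1 + √5)/2, and ratios of consecutive Fibonacci or Lucas terms converge to the same number. A short sketch:

```python
from math import sqrt

def ratio_sequence(a0, a1, n):
    """Ratio of consecutive terms of a Fibonacci-like sequence after n steps."""
    a, b = a0, a1
    for _ in range(n):
        a, b = b, a + b
    return b / a

phi = (1 + sqrt(5)) / 2                 # diagonal/side ratio of a regular pentagon
fib_ratio = ratio_sequence(1, 1, 30)    # Fibonacci: 1, 1, 2, 3, 5, ...
lucas_ratio = ratio_sequence(2, 1, 30)  # Lucas: 2, 1, 3, 4, 7, ...
```

Both ratios agree with φ to many digits after 30 steps, which is why nested pentagons (and the icosahedron mentioned in the record) exhibit golden-ratio proportions.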
Regularized degenerate multi-solitons
NASA Astrophysics Data System (ADS)
Correa, Francisco; Fring, Andreas
2016-09-01
We report complex {P}{T} -symmetric multi-soliton solutions to the Korteweg de-Vries equation that asymptotically contain one-soliton solutions, with each of them possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate energy solutions of the Schrödinger equation. Technically this is achieved by the application of Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively they may be constructed from a limiting process within the context Hirota's direct method or on a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.
Natural frequency of regular basins
NASA Astrophysics Data System (ADS)
Tjandra, Sugih S.; Pudjaprasetya, S. R.
2014-03-01
Similar to the vibration of a guitar string or an elastic membrane, water waves in an enclosed basin undergo standing oscillatory waves, also known as seiches. The resonant (eigen) periods of seiches are determined by the water depth and the geometry of the basin. For regular basins, explicit formulas are available. Resonance occurs when the dominant frequency of the external force matches an eigenfrequency of the basin. In this paper, we implement a conservative finite volume scheme for the 2D shallow water equations to simulate resonance in closed basins. Further, we use this scheme, together with the energy spectra of the recorded signals, to extract the resonant periods of arbitrary basins. Here we first test the procedure by extracting the resonant periods of a square closed basin. The numerical resonant periods that we obtain are comparable with those from analytical formulas.
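The explicit formulas alluded to for regular basins are Merian-type results; for a 1D closed basin of length L and uniform depth H the eigenperiods are T_n = 2L / (n * sqrt(g*H)). A sketch (the basin dimensions are illustrative; the paper's square basin uses the analogous 2D formula with mode pairs):

```python
from math import sqrt

def merian_periods(length, depth, n_modes=3, g=9.81):
    """Eigenperiods T_n = 2L / (n*sqrt(g*H)) of a 1D closed basin (Merian's formula)."""
    c = sqrt(g * depth)                 # shallow-water wave speed
    return [2.0 * length / (n * c) for n in range((1), n_modes + 1)]

periods = merian_periods(length=100.0, depth=10.0)  # a 100 m long, 10 m deep basin
```

Peaks of the simulated energy spectrum can then be compared against these analytical periods to validate the extraction procedure.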
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-18
... of Humana Insurance Company, a Division of CareNetwork, Inc., Front End Operations and Account Installation- Product Testing Groups, Green Bay, Wisconsin, to apply for Trade Adjustment Assistance (TAA). On... De Pere, and not Green Bay, Wisconsin. Accordingly, the subject workers are workers at...
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
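One way to picture a multiplicative strategy is a fixed-point iteration in which the effective Tikhonov parameter is recomputed from the current iterate instead of being fixed in advance. The update rule below (lam_k set to the residual-to-penalty ratio) is an assumption made for illustration; the paper's multiplied functional differs in detail:

```python
import numpy as np

def multiplicative_tikhonov(A, b, n_iter=20, eps=1e-12):
    """Sketch: the regularization weight is re-derived from the iterate itself,
    lam_k = ||A x_k - b||^2 / ||x_k||^2, so no parameter is chosen beforehand."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + np.eye(n), A.T @ b)   # crude initial guess
    for _ in range(n_iter):
        lam = (np.linalg.norm(A @ x - b) ** 2 + eps) / (np.linalg.norm(x) ** 2 + eps)
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return x

A = np.array([[1.0, 0.5], [0.5, 1.0], [0.2, 0.1]])      # hypothetical transfer matrix
b = np.array([1.0, 1.0, 0.5])                            # hypothetical measured response
x = multiplicative_tikhonov(A, b)
residual = np.linalg.norm(A @ x - b)
```

As the residual shrinks, so does the effective lam, mimicking the self-adjusting amount of regularization described in the abstract.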
Total variation regularization with bounded linear variations
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2016-09-01
One of the best-known techniques for signal denoising is total variation (TV) regularization. A better understanding of TV regularization is necessary to provide a stronger mathematical justification for using TV minimization in signal processing. In this work, we deal with an intermediate case between the one- and two-dimensional settings: the discrete function to be processed is two-dimensional, radially symmetric, and piecewise constant. For this case, the exact solution to the problem can be obtained as follows: first, calculate the average values of the noisy function over rings; second, calculate the shift values and their directions using closed formulae that depend on the regularization parameter and the structure of the rings. Although TV regularization is effective for noise removal, it often destroys fine details and thin structures in images. To overcome this drawback, we perform TV denoising subject to the constraint that linear signal variations are bounded.
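For reference, the unconstrained 1D TV (ROF) problem the paper builds on can be approximated with plain gradient descent on a smoothed objective. This is a generic sketch, not the paper's exact solver, and it omits the bounded-variation constraint; the smoothing parameter eps and step size are illustrative choices:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.1, eps=1e-3, n_iter=1000, step=0.1):
    """Gradient descent on a smoothed 1D ROF/TV objective:
    0.5 * ||u - f||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps).
    eps rounds the kink of |.| so plain gradient descent applies."""
    u = np.asarray(f, dtype=float).copy()
    for _ in range(n_iter):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)   # derivative of smoothed |d|
        grad = u - f
        grad[:-1] -= lam * w           # d/du[i]   of difference term i
        grad[1:] += lam * w            # d/du[i+1] of difference term i
        u -= step * grad
    return u
```

Run on a noisy piecewise-constant signal, the result has smaller total variation than the input while staying close to the underlying steps, which is the behavior (and the over-smoothing risk) the abstract discusses.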
Testing times: regularities in the historical sciences.
Jeffares, Ben
2008-12-01
The historical sciences, such as geology, evolutionary biology, and archaeology, appear to have no means to test hypotheses. However, on closer examination, reasoning in the historical sciences relies upon regularities, regularities that can be tested. I outline the role of regularities in the historical sciences, and in the process, blur the distinction between the historical sciences and the experimental sciences: all sciences deploy theories about the world in their investigations.
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM according to regularity and age. Results A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. Recalling regular activities involved only planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion Taken together, our data suggest that planning capacities play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical implications of regularity
Continuum regularization of quantum field theory
Bern, Z.
1986-04-01
Possible nonperturbative continuum regularization schemes for quantum field theory are discussed, based upon the Langevin equation of Parisi and Wu. Breit, Gupta and Zaks made the first proposal for a new gauge-invariant nonperturbative regularization; their scheme is based on smearing in the ''fifth time'' of the Langevin equation. An analysis of their stochastic regularization scheme is given for the case of scalar electrodynamics with the standard covariant gauge fixing. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, two difficulties arise which, in general, ruin the scheme. One problem is that the superficial quadratic divergences force a bottomless action for the noise. Another difficulty is that stochastic regularization by fifth-time smearing is incompatible with Zwanziger's gauge fixing, which is the only known nonperturbative covariant gauge fixing for nonabelian gauge theories. Finally, a successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest. Hopefully, the scheme will lend itself to nonperturbative analysis. 44 refs., 16 figs.
Numerical Regularization of Ill-Posed Problems.
1980-07-09
Unione Matematica Italiana. 4. The parameter choice problem in linear regularization: a mathematical introduction, in "Ill-Posed Problems: Theory and...vector b which is generally unavailable (see [21], [22]). Kdckler [33] has shown, however, that in the case of Tikhonov regularization for matrices it may
Transport Code for Regular Triangular Geometry
1993-06-09
DIAMANT2 solves the two-dimensional static multigroup neutron transport equation in planar regular triangular geometry. Both regular and adjoint, inhomogeneous and homogeneous problems subject to vacuum, reflective or input specified boundary flux conditions are solved. Anisotropy is allowed for the scattering source. Volume and surface sources are allowed for inhomogeneous problems.
Regular Decompositions for H(div) Spaces
Kolev, Tzanio; Vassilevski, Panayot
2012-01-01
We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.
12 CFR 725.3 - Regular membership.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Regular membership. 725.3 Section 725.3 Banks... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit... stock subscription;1 and 1 A credit union which submits its application for membership prior to...
Regularization techniques in realistic Laplacian computation.
Bortel, Radoslav; Sovka, Pavel
2007-11-01
This paper explores regularization options for the ill-posed spline coefficient equations in the realistic Laplacian computation. We investigate the use of Tikhonov regularization, truncated singular value decomposition, and the so-called lambda-correction, with the regularization parameter chosen by the L-curve, generalized cross-validation, quasi-optimality, and discrepancy principle criteria. The range of regularization techniques considered is much wider than in previous works. The improvement of the realistic Laplacian is investigated by simulations on a three-shell spherical head model. The conclusion is that the best performance is provided by the combination of Tikhonov regularization and the generalized cross-validation criterion, a combination that has never been suggested for this task before.
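The winning combination in this study (Tikhonov regularization with the GCV criterion) can be sketched via the SVD. This is the generic textbook construction, not the authors' implementation; the test matrix and candidate-parameter grid are illustrative assumptions:

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Tikhonov-regularized solution of A x = b with the parameter
    chosen by generalized cross-validation (GCV), via the SVD of A:
    filter factors f_i = s_i^2 / (s_i^2 + lam^2),
    GCV(lam) = ||residual||^2 / (m - sum(f))^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best_lam, best_g = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)
        # residual inside the column space plus the component outside it
        resid = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
        g = resid / (len(b) - np.sum(f)) ** 2
        if g < best_g:
            best_lam, best_g = lam, g
    f = s**2 / (s**2 + best_lam**2)
    x = Vt.T @ ((f / s) * beta)
    return x, best_lam
```

The appeal of GCV here is that, unlike the discrepancy principle, it needs no estimate of the noise level, which is rarely available for the spline coefficient equations.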
A linear functional strategy for regularized ranking.
Kriukova, Galyna; Panasiuk, Oleksandra; Pereverzyev, Sergei V; Tkachenko, Pavlo
2016-01-01
Regularization schemes are frequently used for performing ranking tasks, and this topic has been intensively studied in recent years. However, to be effective a regularization scheme should be equipped with a suitable strategy for choosing the regularization parameter. In the present study we discuss an approach based on the idea of a linear combination of regularized rankers corresponding to different values of the regularization parameter. The coefficients of the linear combination are estimated by means of the so-called linear functional strategy. We provide a theoretical justification of the proposed approach and illustrate it by numerical experiments, some of which relate to ranking the risk of nocturnal hypoglycemia in diabetes patients.
Quantitative regularities in floodplain formation
NASA Astrophysics Data System (ADS)
Nevidimova, O.
2009-04-01
Modern methods of the theory of complex systems allow one to build mathematical models of complex systems in which self-organizing processes are largely determined by nonlinear effects and feedback. However, some factors exert a significant influence on the dynamics of geomorphosystems yet can hardly be expressed adequately in the language of mathematical models. Conceptual modeling allows us to overcome this difficulty. It is based on the methods of synergetics, which, together with the theory of dynamical systems and classical geomorphology, make it possible to describe the dynamics of geomorphological systems. The most adequate concept for mathematical modeling of complex systems is model dynamics based on equilibrium: a dynamic equilibrium toward which all geomorphosystems tend in their evolution. As an objective law, it is revealed in the evolution of fluvial relief in general, and in river channel processes in particular, demonstrating the ability of these systems to self-organize. The channel process is expressed in the formation of river reaches, rifts, meanders and floodplain. As the floodplain is a surface periodically flooded during high waters, it naturally connects the river channel with the slopes, being one of the boundary expressions of the water stream's activity. Floodplain dynamics are inseparable from channel dynamics: the floodplain is formed by simultaneous horizontal and vertical displacement of the river channel, that is, Y = Y(x, y), where x, y are the horizontal and vertical coordinates and Y is the floodplain height. When dy/dt = 0 (for a channel that is not lowering), the river, displaced in the horizontal plane, leaves behind a low surface whose flooding during high waters (total duration of flooding) changes from a maximum at the initial moment t0 to zero at the moment tn. The total amount of material accumulated on the floodplain surface changes in a similar manner
Functional MRI using regularized parallel imaging acquisition.
Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M; Belliveau, John W; Wald, Lawrence L; Kwong, Kenneth K
2005-08-01
Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced number of data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter. Since regularization requires a static prior, the dynamic contrast-to-noise ratio (CNR) in parallel MRI is affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from a combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and with different array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions.
Completeness and regularity of generalized fuzzy graphs.
Samanta, Sovan; Sarkar, Biswajit; Shin, Dongmin; Pal, Madhumangal
2016-01-01
Fuzzy graphs are the backbone of many real systems such as networks, images, and scheduling. But due to restrictions on edges, fuzzy graphs are limited in their ability to represent some systems. Generalized fuzzy graphs avoid such restrictions. In this study, generalized fuzzy graphs are introduced and their matrix representation is described. Completeness and regularity are two important parameters of graph theory; here, regular and complete generalized fuzzy graphs are introduced and some of their properties are discussed. Finally, effective regular graphs are exemplified.
Partitioning of regular computation on multiprocessor systems
NASA Technical Reports Server (NTRS)
Lee, Fung Fung
1988-01-01
Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.
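The comparison criterion can be made concrete with a back-of-the-envelope model. The sketch below assumes a 5-point (nearest-neighbour) communication pattern and uses strip and square-block partitions as the illustrative family; the mesh and processor counts are hypothetical, not from the paper:

```python
import math

def comp_comm_ratio(n, p, scheme):
    """Computation-to-communication ratio per interior processor for one
    sweep over an n x n mesh with nearest-neighbour (5-point) communication,
    partitioned among p processors as horizontal strips or square blocks."""
    comp = n * n / p                   # mesh points updated per processor
    if scheme == "strip":
        comm = 2 * n                   # two full-width boundary rows
    elif scheme == "block":
        side = n / math.sqrt(p)
        comm = 4 * side                # four boundary edges of the block
    else:
        raise ValueError(scheme)
    return comp / comm

# Hypothetical numbers: 1024 x 1024 mesh on 16 processors
strip = comp_comm_ratio(1024, 16, "strip")   # 32.0
block = comp_comm_ratio(1024, 16, "block")   # 64.0
```

For the same workload per processor, square blocks halve the boundary traffic relative to strips here, which is the kind of trade-off the partition family in the paper is designed to expose.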
[Serum ferritin in donors with regular plateletpheresis].
Ma, Chun-Hui; Guo, Ru-Hua; Wu, Wei-Jian; Yan, Jun-Xiong; Yu, Jin-Lin; Zhu, Ye-Hua; He, Qi-Tong; Luo, Yi-Hong; Huang, Lu; Ye, Rui-Yun
2011-04-01
This study aimed to evaluate the impact of regular platelet donation on the serum ferritin (SF) of donors. A total of 93 male blood donors, including 24 first-time plateletpheresis donors and 69 regular plateletpheresis donors, were selected randomly. Their SF level was measured by ELISA. The results showed that the SF levels of first-time and regular plateletpheresis donors were 91.08 ± 23.38 µg/L and 57.16 ± 35.48 µg/L respectively; both were within normal limits, but there was a significant difference between the two groups (p < 0.05). The SF level decreased as donation frequency increased, but there were no significant differences between the groups with different donation frequencies, and no correlation with lifetime platelet donations was found. It is concluded that regular plateletpheresis donors may have a lower SF level.
Epigenetic adaptation to regular exercise in humans.
Ling, Charlotte; Rönn, Tina
2014-07-01
Regular exercise has numerous health benefits, for example, it reduces the risk of cardiovascular disease and cancer. It has also been shown that the risk of type 2 diabetes can be halved in high-risk groups through nonpharmacological lifestyle interventions involving exercise and diet. Nevertheless, the number of people living a sedentary life is dramatically increasing worldwide. Researchers have searched for molecular mechanisms explaining the health benefits of regular exercise for decades and it is well established that exercise alters the gene expression pattern in multiple tissues. However, until recently it was unknown that regular exercise can modify the genome-wide DNA methylation pattern in humans. This review will focus on recent progress in the field of regular exercise and epigenetics.
The Volume of the Regular Octahedron
ERIC Educational Resources Information Center
Trigg, Charles W.
1974-01-01
Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
Regularization of B-Spline Objects.
Xu, Guoliang; Bajaj, Chandrajit
2011-01-01
By a d-dimensional B-spline object (denoted as ), we mean a B-spline curve (d = 1), a B-spline surface (d = 2) or a B-spline volume (d = 3). By regularization of a B-spline object we mean the process of relocating the control points of such that they approximate an isometric map of its definition domain in certain directions and is shape preserving. In this paper we develop an efficient regularization method for , d = 1, 2, 3, based on solving weak-form L2-gradient flows constructed from the minimization of certain regularizing energy functionals. These flows are integrated via the finite element method using B-spline basis functions. Our experimental results demonstrate that our new regularization method is very effective.
Wavelet Characterizations of Multi-Directional Regularity
NASA Astrophysics Data System (ADS)
Slimane, Mourad Ben
2011-05-01
The study of d-dimensional traces of functions of m variables leads to directional behaviors. The purpose of this paper is twofold. Firstly, we extend the notion of one-direction pointwise Hölder regularity introduced by Jaffard to multiple directions. Secondly, we characterize multi-directional pointwise regularity by Triebel anisotropic wavelet coefficients (resp. leaders), and also by the Calderón anisotropic continuous wavelet transform.
Probabilistic regularization in inverse optical imaging.
De Micheli, E; Viano, G A
2000-11-01
The problem of object restoration in the case of spatially incoherent illumination is considered. A regularized solution to the inverse problem is obtained through a probabilistic approach, and a numerical algorithm based on the statistical analysis of the noisy data is presented. Particular emphasis is placed on the question of the positivity constraint, which is incorporated into the probabilistically regularized solution by means of a quadratic programming technique. Numerical examples illustrating the main steps of the algorithm are also given.
Usual Source of Care in Preventive Service Use: A Regular Doctor versus a Regular Site
Xu, K Tom
2002-01-01
Objective To compare the effects of having a regular doctor and having a regular site on five preventive services, controlling for the endogeneity of having a usual source of care. Data Source The Medical Expenditure Panel Survey 1996 conducted by the Agency for Healthcare Research and Quality and the National Center for Health Statistics. Study Design Mammograms, pap smears, blood pressure checkups, cholesterol level checkups, and flu shots were examined. A modified behavioral model framework was presented, which controlled for the endogeneity of having a usual source of care. Based on this framework, a two-equation empirical model was established to predict the probabilities of having a regular doctor and having a regular site, and use of each type of preventive service. Principal Findings Having a regular doctor was found to have a greater impact than having a regular site on discretional preventive services, such as blood pressure and cholesterol level checkups. No statistically significant differences were found between the effects of having a regular doctor and having a regular site on the use of flu shots, pap smears, and mammograms. Among the five preventive services, having a usual source of care had the greatest impact on cholesterol level checkups and pap smears. Conclusions Promoting a stable physician–patient relationship can improve patients' timely receipt of clinical prevention. For certain preventive services, having a regular doctor is more effective than having a regular site. PMID:12546284
Perturbations in a regular bouncing universe
Battefeld, T.J.; Geshnizjani, G.
2006-03-15
We consider a simple toy model of a regular bouncing universe. The bounce is caused by an extra timelike dimension, which leads to a sign flip of the ρ² term in the effective four-dimensional Randall-Sundrum-like description. We find a wide class of possible bounces: big bang avoiding ones for regular matter content, and big rip avoiding ones for phantom matter. Focusing on radiation as the matter content, we discuss the evolution of scalar, vector and tensor perturbations. We compute a spectral index of n_s = -1 for scalar perturbations and a deep blue index for tensor perturbations after invoking vacuum initial conditions, ruling out such a model as a realistic one. We also find that the spectrum (evaluated at Hubble crossing) is sensitive to the bounce. We conclude that it is challenging, but not impossible, for cyclic/ekpyrotic models to succeed, if one can find a regularized version.
Shadow of rotating regular black holes
NASA Astrophysics Data System (ADS)
Abdujabbarov, Ahmadjon; Amir, Muhammed; Ahmedov, Bobomurat; Ghosh, Sushant G.
2016-05-01
We study the shadows cast by different types of rotating regular black holes, viz. Ayón-Beato-García (ABG), Hayward, and Bardeen. In addition to the total mass (M) and rotation parameter (a), these black holes carry different parameters: electric charge (Q), deviation parameter (g), and magnetic charge (g*). Interestingly, the size of the shadow is affected by these parameters in addition to the rotation parameter. We find that the radius of the shadow in each case decreases monotonically, and the distortion parameter increases, as the values of these parameters increase. A comparison with the standard Kerr case is also made. We have also studied the influence of a plasma environment on the shadow of a regular black hole. The presence of the plasma increases the apparent size of the shadow due to two effects: (i) gravitational redshift of the photons and (ii) radial dependence of the plasma density.
Nonlinear electrodynamics and regular black holes
NASA Astrophysics Data System (ADS)
Sajadi, S. N.; Riazi, N.
2017-03-01
In this work, an exact regular black hole solution in General Relativity is presented. The source is a nonlinear electromagnetic field with the algebraic structure T^0_0 = T^1_1 for the energy-momentum tensor, partially satisfying the weak energy condition but not the strong energy condition. In the weak field limit, the EM field behaves like the Maxwell field. The solution corresponds to a charged black hole with q ≤ 0.77 m. The metric, the curvature invariants, and the electric field are regular everywhere. The BH is stable against small perturbations of spacetime, and its geometrothermodynamical stability has been investigated using the Weinhold metric. Finally, we investigate the idea that the observable universe lives inside a regular black hole, and argue that this picture might provide a viable description of the universe.
Regular homotopy for immersions of graphs into surfaces
NASA Astrophysics Data System (ADS)
Permyakov, D. A.
2016-06-01
We study invariants of regular immersions of graphs into surfaces up to regular homotopy. The concept of the winding number is used to introduce a new simple combinatorial invariant of regular homotopy. Bibliography: 20 titles.
Regular transport dynamics produce chaotic travel times
NASA Astrophysics Data System (ADS)
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F.; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Generalised hyperbolicity in spacetimes with Lipschitz regularity
NASA Astrophysics Data System (ADS)
Sanchez Sanchez, Yafet; Vickers, James A.
2017-02-01
In this paper we obtain general conditions under which the wave equation is well-posed in spacetimes with metrics of Lipschitz regularity. In particular, the results can be applied to spacetimes where there is a loss of regularity on a hypersurface, such as shell-crossing singularities, thin shells of matter, and surface layers. This provides a framework for regarding gravitational singularities not as obstructions to the world lines of point particles, but rather as obstructions to the dynamics of test fields.
Demosaicing as the problem of regularization
NASA Astrophysics Data System (ADS)
Kunina, Irina; Volkov, Aleksey; Gladilin, Sergey; Nikolaev, Dmitry
2015-12-01
Demosaicing is the process of reconstructing a full-color image from the Bayer mosaic used in digital cameras for image formation. This problem is usually treated as an interpolation problem. In this paper, we propose instead to consider demosaicing as the problem of solving an underdetermined system of algebraic equations using regularization methods. We consider regularization with standard l1/2-, l1-, and l2-norms and its effect on the quality of image reconstruction. The experimental results show that the proposed technique can both be used within existing methods and serve as the basis for new ones.
Regular Gleason Measures and Generalized Effect Algebras
NASA Astrophysics Data System (ADS)
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularizing cosmological singularities by varying physical constants
Dąbrowski, Mariusz P.; Marosek, Konrad
2013-02-01
Varying physical constant cosmologies were claimed to solve standard cosmological problems such as the horizon, the flatness and the Λ-problem. In this paper, we suggest yet another possible application of these theories: solving the singularity problem. By specifying some examples we show that various cosmological singularities may be regularized provided the physical constants evolve in time in an appropriate way.
Tauberian Theorems for Matrix Regular Variation
MEERSCHAERT, M. M.; SCHEFFLER, H.-P.
2013-01-01
Karamata’s Tauberian theorem relates the asymptotics of a nondecreasing right-continuous function to that of its Laplace-Stieltjes transform, using regular variation. This paper establishes the analogous Tauberian theorem for matrix-valued functions. Some applications to time series analysis are indicated. PMID:24644367
Regular Nonchaotic Attractors with Positive Plural
NASA Astrophysics Data System (ADS)
Zhang, Xu
2016-12-01
The study of strange nonchaotic attractors is an interesting topic, where the dynamics are neither regular nor chaotic (chaotic here meaning positive Lyapunov exponents) and the attractors have complicated geometric, or fractal, structure. We find that in a class of planar first-order nonautonomous systems there can exist attractors whose shape is regular, on which the orbits are transitive, and whose dynamics are not chaotic. We call attractors of this type regular nonchaotic attractors with positive plural; they are distinct from strange nonchaotic attractors, attracting fixed points, and attracting periodic orbits. Several examples with computer simulations are given. The first two examples have annulus-shaped attractors, and another two have disk-shaped attractors. The last two examples, with external driving at two incommensurate frequencies, have regular nonchaotic attractors with positive plural, implying that external driving at two incommensurate frequencies might not be sufficient to guarantee that a system has strange nonchaotic attractors.
Generalisation of Regular and Irregular Morphological Patterns.
ERIC Educational Resources Information Center
Prasada, Sandeep; Pinker, Steven
1993-01-01
When it comes to explaining English verbs' patterns of regular and irregular generalization, single-network theories have difficulty with the former process and rule-only theories with the latter. Linguistic and psycholinguistic evidence, based on observation during experiments and simulations in morphological pattern generation, independently call…
Fast Image Reconstruction with L2-Regularization
Bilgic, Berkin; Chatnuntawech, Itthi; Fan, Audrey P.; Setsompop, Kawin; Cauley, Stephen F.; Wald, Lawrence L.; Adalsteinsson, Elfar
2014-01-01
Purpose: We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speed-up relative to state-of-the-art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction. Materials and Methods: We compare fast L2-based methods to state-of-the-art algorithms employing iterative L1- and L2-regularization in numerical phantom and in vivo data in three applications: 1) fast Quantitative Susceptibility Mapping (QSM), 2) lipid artifact suppression in Magnetic Resonance Spectroscopic Imaging (MRSI), and 3) Diffusion Spectrum Imaging (DSI). In all cases, the proposed L2-based methods are compared with the state-of-the-art algorithms, and a two-to-three-order-of-magnitude speed-up is demonstrated with similar reconstruction quality. Results: The closed-form solution developed for regularized QSM allows processing of a 3D volume in under 5 seconds, the proposed lipid suppression algorithm takes under 1 second to reconstruct single-slice MRSI data, and the PCA-based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice in under 30 seconds, all running in Matlab on a standard workstation. Conclusion: For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or to L1-based iterative algorithms, without compromising image quality. PMID:24395184
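The computational advantage of L2 regularization can be sketched generically (a toy linear model, not the paper's QSM/MRSI/DSI operators): the L2-penalized least-squares problem has a closed-form solution requiring one linear solve, whereas L1-type penalties require iteration.

```python
import numpy as np

# Generic illustration (assumed toy operators, not the paper's models):
# min_x ||Ax - b||^2 + lam ||x||^2 has the closed-form solution
#   x = (A^T A + lam I)^{-1} A^T b,
# so no iterations are needed.
rng = np.random.default_rng(1)
m, n = 200, 100
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)
lam = 1e-3

# One linear solve -- no iterations.
x_closed = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Sanity check: iterative gradient descent on the same objective
# converges to the same point, just far more slowly.
x_iter = np.zeros(n)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
for _ in range(5000):
    x_iter -= step * (A.T @ (A @ x_iter - b) + lam * x_iter)

print(np.linalg.norm(x_closed - x_iter))
```

For structured operators (as in QSM, where the forward model diagonalizes under the FFT) the closed-form solve itself becomes fast, which is what enables the sub-5-second volume processing reported above.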
Strategies of Teachers in the Regular Classroom
ERIC Educational Resources Information Center
De Leeuw, Renske Ria; De Boer, Anke Aaltje
2016-01-01
It is known that regular schoolteachers have difficulties educating students with social, emotional and behavioral difficulties (SEBD), mainly because of their disruptive behavior. To help manage the disruptive behavior of students with SEBD, much advice and many strategies are provided in the educational literature. However, very little is known…
Regularities in Spearman's Law of Diminishing Returns.
ERIC Educational Resources Information Center
Jensen, Arthur R.
2003-01-01
Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…
On the regularity in some variational problems
NASA Astrophysics Data System (ADS)
Ragusa, Maria Alessandra; Tachikawa, Atsushi
2017-01-01
Our main goal is to study some regularity results in which estimates in Morrey spaces are considered for the derivatives of local minimizers of variational integrals of the form 𝒜 (u ,Ω )= ∫Ω F (x ,u ,D u ) dx, where Ω is a bounded domain in ℝm and the integrand F takes several different forms.
Prox-regular functions in Hilbert spaces
NASA Astrophysics Data System (ADS)
Bernard, Frédéric; Thibault, Lionel
2005-03-01
This paper studies the prox-regularity concept for functions in the general context of Hilbert space. In particular, a subdifferential characterization is established as well as several other properties. It is also shown that the Moreau envelopes of such functions are continuously differentiable.
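The differentiability of Moreau envelopes can be checked numerically on a classical example (an assumed illustration, not taken from the paper): the Moreau envelope of the nonsmooth function f(y) = |y| is the Huber function, which is continuously differentiable.

```python
import numpy as np

# Numeric illustration (assumed example): the Moreau envelope
#   e_t f(x) = min_y f(y) + (1 / (2t)) (x - y)^2
# of f(y) = |y| equals the Huber function, which is C^1 even though f is not.
def moreau_abs(x, t, grid=np.linspace(-5, 5, 200001)):
    # brute-force minimization over a fine grid
    return np.min(np.abs(grid) + (x - grid) ** 2 / (2 * t))

def huber(x, t):
    # closed form of the envelope: quadratic near 0, linear far away
    return x * x / (2 * t) if abs(x) <= t else abs(x) - t / 2

t = 0.5
for x in [-2.0, -0.3, 0.0, 0.4, 1.7]:
    assert abs(moreau_abs(x, t) - huber(x, t)) < 1e-4
print("Moreau envelope of |x| matches the Huber function")
```

The kink of |x| at the origin is smoothed into a parabola of curvature 1/t, which is the one-dimensional picture behind the differentiability result mentioned above.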
Semantic Gender Assignment Regularities in German
ERIC Educational Resources Information Center
Schwichtenberg, Beate; Schiller, Niels O.
2004-01-01
Gender assignment relates to a native speaker's knowledge of the structure of the gender system of his/her language, allowing the speaker to select the appropriate gender for each noun. Whereas categorical assignment rules and exceptional gender assignment are well investigated, assignment regularities, i.e., tendencies in the gender distribution…
Starting flow in regular polygonal ducts
NASA Astrophysics Data System (ADS)
Wang, C. Y.
2016-06-01
The starting flows in regular polygonal ducts of S = 3, 4, 5, 6, 8 sides are determined by the method of eigenfunction superposition. The necessary S-fold symmetric eigenfunctions and eigenvalues of the Helmholtz equation are found either exactly or by boundary point match. The results show that the starting time is governed by the first eigenvalue.
Regularity Aspects in Inverse Musculoskeletal Biomechanics
NASA Astrophysics Data System (ADS)
Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten
2008-09-01
Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
Regularization of turbulence - a comprehensive modeling approach
NASA Astrophysics Data System (ADS)
Geurts, B. J.
2011-12-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulent flow will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent manner. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred is given on the basis of homogeneous isotropic turbulence.
Regularity of rotational travelling water waves.
Escher, Joachim
2012-04-13
Several recent results on the regularity of streamlines beneath a rotational travelling wave, along with the wave profile itself, will be discussed. The survey includes the classical water wave problem in both finite and infinite depth, capillary waves and solitary waves as well. A common assumption in all models to be discussed is the absence of stagnation points.
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
42 CFR 61.3 - Purpose of regular fellowships.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...
42 CFR 61.3 - Purpose of regular fellowships.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...
42 CFR 61.3 - Purpose of regular fellowships.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...
42 CFR 61.3 - Purpose of regular fellowships.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Purpose of regular fellowships. 61.3 Section 61.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships...
Charged fermions tunneling from regular black holes
Sharif, M.; Javed, W.
2012-11-15
We study Hawking radiation of charged fermions as a tunneling process from charged regular black holes, i.e., the Bardeen and ABGB black holes. For this purpose, we apply the semiclassical WKB approximation to the general covariant Dirac equation for charged particles and evaluate the tunneling probabilities. We recover the Hawking temperature corresponding to these charged regular black holes. Further, we consider the back-reaction effects of the emitted spin particles from black holes and calculate their corresponding quantum corrections to the radiation spectrum. We find that this radiation spectrum is not purely thermal due to the energy and charge conservation but has some corrections. In the absence of charge, e = 0, our results are consistent with those already present in the literature.
Superfast Tikhonov Regularization of Toeplitz Systems
NASA Astrophysics Data System (ADS)
Turnes, Christopher K.; Balcan, Doru; Romberg, Justin
2014-08-01
Toeplitz-structured linear systems arise often in practical engineering problems. Correspondingly, a number of algorithms have been developed that exploit Toeplitz structure to gain computational efficiency when solving these systems. The earliest "fast" algorithms for Toeplitz systems required O(n^2) operations, while more recent "superfast" algorithms reduce the cost to O(n (log n)^2) or below. In this work, we present a superfast algorithm for Tikhonov regularization of Toeplitz systems. Using an "extension-and-transformation" technique, our algorithm translates a Tikhonov-regularized Toeplitz system into a type of specialized polynomial problem known as tangential interpolation. Under this formulation, we can compute the solution in only O(n (log n)^2) operations. We use numerical simulations to demonstrate our algorithm's complexity and verify that it returns stable solutions.
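The paper's tangential-interpolation machinery is involved; the following is only a simpler sketch of the structure being exploited (an illustration, not the authors' algorithm): a Toeplitz matrix embeds in a circulant, so every matrix-vector product costs O(n log n) via the FFT, and the Tikhonov normal equations can then be solved matrix-free by conjugate gradients.

```python
import numpy as np

# Sketch (not the superfast tangential-interpolation method): solve the
# Tikhonov normal equations (T^T T + lam I) x = T^T b for a Toeplitz T,
# using only FFT-based O(n log n) products with T and T^T.
rng = np.random.default_rng(2)
n = 128
col = 0.5 ** np.arange(n)            # first column of T
row = col.copy()                     # symmetric example; row[0] must equal col[0]

c = np.concatenate([col, [0.0], row[1:][::-1]])   # circulant embedding, length 2n
fc = np.fft.fft(c)

def T_mv(x):                          # y = T x via the circulant embedding
    return np.fft.ifft(fc * np.fft.fft(x, 2 * n)).real[:n]

def Tt_mv(y):                         # z = T^T y (conjugate symbol)
    return np.fft.ifft(np.conj(fc) * np.fft.fft(y, 2 * n)).real[:n]

x_true = rng.standard_normal(n)
b = T_mv(x_true)
lam = 1e-6

# Plain conjugate gradients on the SPD regularized normal equations.
x = np.zeros(n)
r = Tt_mv(b)                          # residual at x = 0
p, rs = r.copy(), r @ r
for _ in range(400):
    Ap = Tt_mv(T_mv(p)) + lam * p
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-12:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

# Check against a dense direct solve.
I, J = np.indices((n, n))
T = np.where(I >= J, col[I - J], row[J - I])
x_ref = np.linalg.solve(T.T @ T + lam * np.eye(n), T.T @ b)
print(np.allclose(x, x_ref, atol=1e-6))
```

Per-iteration cost here is O(n log n), but the iteration count depends on conditioning; the paper's contribution is a direct method with O(n (log n)^2) total cost.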
Modeling Regular Replacement for String Constraint Solving
NASA Technical Reports Server (NTRS)
Fu, Xiang; Li, Chung-Chih
2010-01-01
Bugs in user input sanitization of software systems often lead to vulnerabilities. Many of them are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FSTs. A compact representation of FSTs is implemented in SUSHI, a string constraint solver, and applied to detecting vulnerabilities in web applications.
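Two of the replacement semantics named above can be seen directly in an ordinary regex engine (a plain illustration using Python's re module, not the paper's FST encoding): greedy matching consumes as much input as possible per replacement, while reluctant matching consumes as little as possible.

```python
import re

# Greedy vs reluctant replacement semantics on the same input.
s = "<a><b>"

# Greedy: ".*" matches as much as possible, so one replacement
# swallows both tags at once.
greedy = re.sub(r"<.*>", "X", s)

# Reluctant (lazy): ".*?" matches as little as possible,
# replacing one tag at a time.
reluctant = re.sub(r"<.*?>", "X", s)

print(greedy)     # X
print(reluctant)  # XX
```

A sanitizer written with the greedy pattern above silently behaves differently from the reluctant one, which is exactly the kind of discrepancy the paper's precise FST models are meant to capture.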
Tracking magnetogram proper motions by multiscale regularization
NASA Technical Reports Server (NTRS)
Jones, Harrison P.
1995-01-01
Long uninterrupted sequences of solar magnetograms from the Global Oscillations Network Group (GONG) network and from the Solar and Heliospheric Observatory (SOHO) satellite will provide the opportunity to study the proper motions of magnetic features. We examine the possible use of multiscale regularization, a scale-recursive estimation technique which begins with a prior model of how state variables and their statistical properties propagate over scale. Short magnetogram sequences are analyzed with the multiscale regularization algorithm as applied to optical flow. The algorithm is found to be efficient; it provides results for all the spatial scales spanned by the data and error estimates for the solutions. It is also found to be less sensitive to evolutionary changes than correlation tracking.
3D Gravity Inversion using Tikhonov Regularization
NASA Astrophysics Data System (ADS)
Toushmalani, Reza; Saibi, Hakim
2015-08-01
Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial, and gravity data often serve this purpose in the petroleum industry. In this paper, the authors present an algorithm for gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined 3D Euler deconvolution to extract the best anomaly-source depth as a priori information to invert the gravity data, and provide a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model shows the 3D shape of the salt dome in this region.
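One common way to make the regularization automatic, sketched here on a synthetic linear forward model (an assumed example; the authors' forward operator, data, and selection rule are not reproduced), is the discrepancy principle: increase the Tikhonov weight until the data misfit matches the assumed noise level.

```python
import numpy as np

# Schematic Tikhonov inversion of a linear forward model d = G m + noise,
# with the regularization weight lam chosen automatically by the
# discrepancy principle (an assumed selection rule for illustration).
rng = np.random.default_rng(3)
n_data, n_model = 80, 120
G = rng.standard_normal((n_data, n_model)) / np.sqrt(n_model)
m_true = np.zeros(n_model)
m_true[40:60] = 1.0                   # a blocky "density anomaly"
sigma = 0.01
d = G @ m_true + sigma * rng.standard_normal(n_data)

target = sigma * np.sqrt(n_data)      # expected noise norm

def tikhonov(lam):
    # closed-form regularized solution for a given weight
    return np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ d)

# Double lam until the residual is no smaller than the noise level:
# fitting the data any harder would mean fitting noise.
lam = 1e-8
while np.linalg.norm(G @ tikhonov(lam) - d) < target and lam < 1.0:
    lam *= 2.0

m_hat = tikhonov(lam)
print(f"chosen lam = {lam:.2e}, misfit = {np.linalg.norm(G @ m_hat - d):.4f}")
```

The a priori depth information from Euler deconvolution mentioned above would enter such a scheme through the forward operator or a reference model, not through the stopping rule.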
Chaos regularization of quantum tunneling rates.
Pecora, Louis M; Lee, Hoshik; Wu, Dong-Ho; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-06-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics the tunneling rates fluctuate greatly with eigenenergies of the states sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories lead to tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Regularity of nuclear structure under random interactions
Zhao, Y. M.
2011-05-06
In this contribution I present a brief introduction to simplicity out of complexity in nuclear structure, specifically, the regularity of nuclear structure under random interactions. I exemplify such simplicity by two examples: spin-zero ground state dominance and positive parity ground state dominance in even-even nuclei. Then I discuss two recent results of nuclear structure in the presence of random interactions, in collaboration with Prof. Arima. Firstly I discuss sd bosons under random interactions, with the focus on excited states in the yrast band. We find a few regular patterns in these excited levels. Secondly I discuss our recent efforts towards obtaining eigenvalues without diagonalizing the full matrices of the nuclear shell model Hamiltonian.
Regular black holes with flux tube core
Zaslavskii, Oleg B.
2009-09-15
We consider a class of black holes for which the area of the two-dimensional spatial cross section has a minimum on the horizon with respect to a quasiglobal (Kruskal-like) coordinate. If the horizon is regular, one can generate a tubelike counterpart of such a metric and smoothly glue it to a black hole region. The resulting composite space-time is globally regular, so all potential singularities under the horizon of the original metric are removed. Such a space-time represents a black hole without an apparent horizon. It is essential that the matter should be nonvacuum in the outer region but vacuumlike in the inner one. As an example we consider the noninteracting mixture of vacuum fluid and matter with a linear equation of state and scalar phantom fields. This approach is extended to distorted metrics, with the requirement of spherical symmetry relaxed.
Power-law regularities in human language
NASA Astrophysics Data System (ADS)
Mehri, Ali; Lashkari, Sahar Mohammadpour
2016-11-01
The complex structure of human language enables us to exchange very complicated information. This communication system obeys some common nonlinear statistical regularities. We investigate four important long-range features of human language, performing our calculations on selected works of seven famous writers. Zipf's law and Heaps' law, which imply well-known power-law behaviors, are established in human language and show a qualitative inverse relation with each other. Furthermore, the informational content associated with word ordering is measured using an entropic metric. We also calculate the fractal dimension of words in the text by the box-counting method. The fractal dimension of each word, a positive value less than or equal to one, reflects its spatial distribution in the text. Generally, we can claim that human language follows the mentioned power-law regularities. These power-law relations imply the existence of long-range correlations between word types that convey a particular idea.
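The interplay of the two laws can be demonstrated on synthetic tokens (an assumed toy corpus, not the literary works studied in the paper): drawing words from a Zipfian rank-frequency distribution produces Heaps-type sublinear vocabulary growth.

```python
import numpy as np

# Toy check: draw tokens from Zipf's law (frequency ~ 1 / rank) and measure
# Heaps' law, i.e. vocabulary size V growing as a power of text length N.
rng = np.random.default_rng(4)
n_types, n_tokens = 5000, 200000
ranks = np.arange(1, n_types + 1)
p = 1.0 / ranks                      # Zipf's law with exponent 1
p /= p.sum()
tokens = rng.choice(n_types, size=n_tokens, p=p)

# Record vocabulary size at a few text lengths.
seen = set()
N_points, V_points = [], []
for i, w in enumerate(tokens, 1):
    seen.add(w)
    if i in (1000, 10000, 100000):
        N_points.append(i)
        V_points.append(len(seen))

# Fit V = K * N^beta in log-log coordinates; Heaps' law predicts 0 < beta < 1.
beta = np.polyfit(np.log(N_points), np.log(V_points), 1)[0]
print(f"Heaps exponent beta ~ {beta:.2f}")
```

The sublinear exponent is the quantitative form of the "qualitative inverse relation" noted above: the more top-heavy the Zipf distribution, the more slowly new word types accumulate.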
Symmetries and regular behavior of Hamiltonian systems.
Kozlov, Valeriy V.
1996-03-01
The behavior of the phase trajectories of the Hamilton equations is commonly classified as regular and chaotic. Regularity is usually related to the condition for complete integrability, i.e., a Hamiltonian system with n degrees of freedom has n independent integrals in involution. If at the same time the simultaneous integral manifolds are compact, the solutions of the Hamilton equations are quasiperiodic. In particular, the entropy of the Hamiltonian phase flow of a completely integrable system is zero. It is found that there is a broader class of Hamiltonian systems that do not show signs of chaotic behavior. These are systems that allow n commuting "Lagrangian" vector fields, i.e., the symplectic 2-form on each pair of such fields is zero. They include, in particular, Hamiltonian systems with multivalued integrals. (c) 1996 American Institute of Physics.
A regularization approach to hydrofacies delineation
Wohlberg, Brendt; Tartakovsky, Daniel
2009-01-01
We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.
Speech enhancement using local spectral regularization
NASA Astrophysics Data System (ADS)
Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly; Diaz, Arnoldo
2016-09-01
A locally-adaptive algorithm for speech enhancement based on local spectral regularization is presented. The algorithm is able to retrieve a clean speech signal from a noisy signal using locally-adaptive signal processing. The proposed algorithm is able to increase the quality of a noisy signal in terms of objective metrics. Computer simulation results obtained with the proposed algorithm are presented and discussed in processing speech signals corrupted with additive noise.
Regular Language Constrained Sequence Alignment Revisited
NASA Astrophysics Data System (ADS)
Kucherov, Gregory; Pinhas, Tamar; Ziv-Ukelson, Michal
Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, Arslan [1] introduced the Regular Language Constrained Sequence Alignment Problem and proposed an O(n^2 t^4) time and O(n^2 t^2) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the non-deterministic automaton, which is given as input. Chung et al. [2] proposed a faster O(n^2 t^3) time algorithm for the same problem. In this paper, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst-case time complexity bound to O(n^2 t^3 / log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner Tree computation. While it does not improve the run-time complexity in the worst case, our simulations show that both approaches are efficient in practice, especially when the input automata are dense.
Hyperspectral Image Recovery via Hybrid Regularization
NASA Astrophysics Data System (ADS)
Arablouei, Reza; de Hoog, Frank
2016-12-01
Natural images tend to mostly consist of smooth regions with individual pixels having highly correlated spectra. This information can be exploited to recover hyperspectral images of natural scenes from their incomplete and noisy measurements. To perform the recovery while taking full advantage of the prior knowledge, we formulate a composite cost function containing a square-error data-fitting term and two distinct regularization terms pertaining to spatial and spectral domains. The regularization for the spatial domain is the sum of total-variation of the image frames corresponding to all spectral bands. The regularization for the spectral domain is the l1-norm of the coefficient matrix obtained by applying a suitable sparsifying transform to the spectra of the pixels. We use an accelerated proximal-subgradient method to minimize the formulated cost function. We analyze the performance of the proposed algorithm and prove its convergence. Numerical simulations using real hyperspectral images exhibit that the proposed algorithm offers an excellent recovery performance with a number of measurements that is only a small fraction of the hyperspectral image data size. Simulation results also show that the proposed algorithm significantly outperforms an accelerated proximal-gradient algorithm that solves the classical basis-pursuit denoising problem to recover the hyperspectral image.
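A basic ingredient of such proximal methods is the proximal operator of the l1 term, which is elementwise soft-thresholding. The sketch below (generic, not the paper's hyperspectral operators or its accelerated scheme) shows the operator and a few plain ISTA iterations using it.

```python
import numpy as np

# Building block of proximal l1 methods: the proximal operator of t*||.||_1
# is elementwise soft-thresholding, prox(v) = sign(v) * max(|v| - t, 0).
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Usage: plain (unaccelerated) ISTA on min_x 0.5||Ax - b||^2 + t||x||_1
# with an assumed toy sparse-recovery problem.
rng = np.random.default_rng(5)
A = rng.standard_normal((30, 60)) / np.sqrt(30)   # ~unit-norm columns
x0 = np.zeros(60)
x0[[5, 17]] = [1.0, -2.0]                         # 2-sparse ground truth
b = A @ x0
t = 0.05

x = np.zeros(60)
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L for the smooth part
for _ in range(1000):
    # gradient step on the data term, then prox step on the l1 term
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * t)

print("largest recovered coefficients:", x[5], x[17])
```

The paper's algorithm adds a total-variation term (handled by a subgradient) and acceleration, but each iteration still reduces to inexpensive operations of this kind.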
Guaranteed classification via regularized similarity learning.
Guo, Zheng-Chu; Ying, Yiming
2014-03-01
Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there has been little theoretical study of the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establish their generalization bounds. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound of similarity learning. This shows that a good generalization of the learned similarity function guarantees a good classification of the resulting linear classifier. Our results extend and improve those obtained by Bellet, Habrard, and Sebban (2012). Due to the techniques dependent on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound obtained there holds true only for the Frobenius matrix-norm regularization. Our techniques using the Rademacher complexity (Bartlett & Mendelson, 2002) and its related Khinchin-type inequality enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse L1-norm and mixed (2,1)-norm.
Automatic detection of regularly repeating vocalizations
NASA Astrophysics Data System (ADS)
Mellinger, David
2005-09-01
Many animal species produce repetitive sounds at regular intervals. This regularity can be used for automatic recognition of the sounds, providing improved detection at a given signal-to-noise ratio. Here, the detection of sperm whale sounds is examined. Sperm whales produce highly repetitive "regular clicks" at periods of about 0.2-2 s, and faster click trains in certain behavioral contexts. The following detection procedure was tested: a spectrogram was computed; values within a certain frequency band were summed; time windowing was applied; each windowed segment was autocorrelated; and the maximum of the autocorrelation within a certain periodicity range was chosen. This procedure was tested on sets of recordings containing sperm whale sounds and interfering sounds, both low-frequency recordings from autonomous hydrophones and high-frequency ones from towed hydrophone arrays. An optimization procedure iteratively varies detection parameters (spectrogram frame length and frequency range, window length, periodicity range, etc.). Performance of various sets of parameters was measured by setting a standard level of allowable missed calls, and the resulting optimum parameters are described. Performance is also compared to that of a neural network trained using the data sets. The method is also demonstrated for sounds of blue whales, minke whales, and seismic airguns. [Funding from ONR.]
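The core of the described procedure, band-summed envelope, windowing, autocorrelation, and a peak search within a periodicity range, can be sketched on synthetic data (assumed parameters and a fabricated click train, not real whale recordings):

```python
import numpy as np

# Sketch of periodicity-based detection: build a synthetic envelope with
# clicks every 0.5 s, autocorrelate a window of it, and pick the lag of the
# autocorrelation peak within an allowed periodicity range.
fs = 100.0                            # envelope sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
env = np.zeros_like(t)
env[(np.arange(len(t)) % int(0.5 * fs)) == 0] = 1.0   # a click every 0.5 s
env += 0.1 * np.random.default_rng(6).standard_normal(len(t))  # noise

w = env[:512] - env[:512].mean()      # one analysis window, mean removed
ac = np.correlate(w, w, mode="full")[len(w) - 1:]     # one-sided autocorrelation
lags = np.arange(len(ac)) / fs

lo, hi = 0.2, 2.0                     # allowed click periods, seconds
mask = (lags >= lo) & (lags <= hi)
period = lags[mask][np.argmax(ac[mask])]
print(f"detected period: {period:.2f} s")
```

In the real system the envelope would come from summing spectrogram values in the species' frequency band, and the window length, band, and periodicity range are exactly the parameters the paper's optimization procedure tunes.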
Regularization Parameter Selections via Generalized Information Criterion
Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling
2009-01-01
We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material. PMID:20676354
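The GIC trade-off can be sketched on a simpler penalized estimator (ridge regression with a synthetic design, standing in for the paper's nonconcave penalties): score each candidate regularization parameter by goodness of fit plus a complexity charge on the effective degrees of freedom.

```python
import numpy as np

# Simplified analogue (assumed example, not the paper's SCAD-type setting):
# choose the ridge parameter lam by a generalized information criterion
#   GIC(lam) = n * log(RSS / n) + kappa * df(lam),
# where df(lam) = tr(H) is the effective degrees of freedom of the ridge hat
# matrix; kappa = log(n) gives a BIC-type selector, kappa = 2 an AIC-type one.
rng = np.random.default_rng(7)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(n)

def gic(lam, kappa):
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)   # hat matrix
    rss = np.sum((y - H @ y) ** 2)
    return n * np.log(rss / n) + kappa * np.trace(H)

lams = np.logspace(-3, 3, 25)
best = min(lams, key=lambda lam: gic(lam, np.log(n)))   # BIC-type choice
print(f"BIC-selected lam: {best:.3g}")
```

Larger kappa penalizes model complexity more heavily, which is the mechanism behind the paper's finding that the BIC-type selector is consistent while the AIC-type selector tends to overfit.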
Discovering Structural Regularity in 3D Geometry
Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.
2010-01-01
We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-13
..., and other individuals with knowledge of the health-insurance industry. The Department carefully... filed, Humana provided health insurance to approximately 35,000 Medicare Advantage enrollees in the relevant geographic markets, and Arcadian provided health insurance to over 14,700 Medicare...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-21
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF LABOR Employment and Training Administration Humana Insurance Company, a Division Of Carenetwork, Inc., Green Bay..., Inc., Green Bay, Wisconsin was based on the findings that the subject firm did not, during the...
Convergence and Fluctuations of Regularized Tyler Estimators
NASA Astrophysics Data System (ADS)
Kammoun, Abla; Couillet, Romain; Pascal, Ferderic; Alouini, Mohamed-Slim
2016-02-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate, and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, a major obstacle to the use of RTEs in practice is the question of how to set the regularization parameter ρ. While a high value of ρ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this regime has usually predated the analysis of the more difficult case of N and n both large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter ρ.
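A minimal sketch of one common RTE fixed-point iteration follows. Normalization conventions vary across the literature; the trace normalization used here is an assumption of the sketch, not necessarily the exact estimator analyzed in the article.

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100):
    """Regularized Tyler estimator by fixed-point iteration.

    X: (n, N) array of n observations of dimension N; rho in (0, 1].
    Iterates C <- (1-rho)*(N/n) * sum_i x_i x_i^T / (x_i^T C^{-1} x_i)
              + rho * I,
    then renormalizes so that trace(C) = N (one common convention).
    """
    n, N = X.shape
    C = np.eye(N)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        # quadratic forms x_i^T C^{-1} x_i for all observations at once
        q = np.einsum('ij,jk,ik->i', X, Cinv, X)
        C = (1 - rho) * (N / n) * (X.T / q) @ X + rho * np.eye(N)
        C *= N / np.trace(C)
    return C
```

Because each sample is weighted by the inverse of its Mahalanobis-type norm, heavy-tailed samples and outliers are automatically downweighted, while the ρ·I term keeps all eigenvalues bounded away from zero, as described above.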
Axial Presentations of Regular Arcs on Mn
Morse, Marston
1972-01-01
THEOREM 1. Let Mn be a Riemannian manifold of class Cm, m > 0. On Mn let g be a simple, compact, sensed, regular arc whose local coordinates are functions of class Cm of the algebraic arc length s, measured along g from a prescribed point of g. There then exists a presentation (F: U, X) on Mn such that g ⊂ X, and each point p(s) of g is represented in the euclidean domain U by coordinates (x1,...,xn) = (s,0,...,0). PMID:16592036
Multichannel image regularization using anisotropic geodesic filtering
Grazzini, Jacopo A
2010-01-01
This paper extends a recent image-dependent regularization approach aimed at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. Building on this, we design a generalized anisotropic geodesic filter, incorporating not only a measure of the edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.
Regularization ambiguities in loop quantum gravity
Perez, Alejandro
2006-02-15
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation which is free of ultraviolet divergences. However, ambiguities associated to the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of an UV problem--the existence of well-behaved regularization of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated to the SU(2) unitary representation used in the diffeomorphism covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated to the definition of the physical inner product - it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find
New Regularization Method for EXAFS Analysis
Reich, Tatiana Ye.; Reich, Tobias; Korshunov, Maxim E.; Antonova, Tatiana V.; Ageev, Alexander L.; Moll, Henry
2007-02-02
As an alternative to the analysis of EXAFS spectra by conventional shell fitting, the Tikhonov regularization method has been proposed. An improved algorithm that utilizes a priori information about the sample has been developed and applied to the analysis of U L3-edge spectra of soddyite, (UO2)2SiO4·2H2O, and of U(VI) sorbed onto kaolinite. The partial radial distribution functions g1(U-U), g2(U-Si), and g3(U-O) of soddyite agree with crystallographic values and previous EXAFS results.
Total-variation regularization with bound constraints
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
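As a toy stand-in for the approach (this is plain projected gradient descent on a smoothed 1D TV objective, not the authors' splitting algorithm; the function name and all parameter values are choices made for the sketch), bound constraints can be enforced by a simple projection onto the box after each step:

```python
import numpy as np

def tv_denoise_box(f, lam, lo, hi, tau=0.05, eps=1e-2, n_iter=3000):
    """Box-constrained 1D TV denoising by projected gradient descent on
    the smoothed objective 0.5*||u - f||^2 + lam * sum sqrt((Du)^2 + eps).

    lo, hi: elementwise bounds enforced by clipping (the projection).
    """
    u = np.clip(f, lo, hi)
    for _ in range(n_iter):
        du = np.diff(u)
        g = du / np.sqrt(du * du + eps)   # derivative of smoothed |du|
        # gradient of the TV term at u_j is g[j-1] - g[j] (zero-padded ends)
        grad = (u - f) + lam * (np.concatenate(([0.0], g))
                                - np.concatenate((g, [0.0])))
        u = np.clip(u - tau * grad, lo, hi)
    return u
```

The smoothing parameter eps rounds the kink of the absolute value so ordinary gradient descent applies; the splitting scheme of the paper avoids this approximation and converges much faster, but the fixed points play the same role.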
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
Accelerating Large Data Analysis By Exploiting Regularities
NASA Technical Reports Server (NTRS)
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
Nonlinear regularization techniques for seismic tomography
Loris, I.; Douma, H.; Nolet, G.; Regone, C.
2010-02-01
The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional linear ℓ2 penalties are compared to so-called sparsity-promoting ℓ1 and ℓ0 penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an ℓ2 norm works almost as well as minimizing the total variation but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer ℓ1 damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple ℓ2 minimization ('Tikhonov regularization'), which should be avoided. In some of our examples, the ℓ0 method produced notable artifacts. In addition we show how nonlinear ℓ1 methods for finding sparse models can be competitive in speed with the widely used ℓ2 methods, certainly under noisy conditions, so that there is no need to shun ℓ1 penalizations.
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis
ERIC Educational Resources Information Center
Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.
2007-01-01
Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…
The Essential Special Education Guide for the Regular Education Teacher
ERIC Educational Resources Information Center
Burns, Edward
2007-01-01
The Individuals with Disabilities Education Act (IDEA) of 2004 has placed a renewed emphasis on the importance of the regular classroom, the regular classroom teacher and the general curriculum as the primary focus of special education. This book contains over 100 topics that deal with real issues and concerns regarding the regular classroom and…
Charge regularization in phase separating polyelectrolyte solutions.
Muthukumar, M; Hua, Jing; Kundagrami, Arindam
2010-02-28
Theoretical investigations of phase separation in polyelectrolyte solutions have so far assumed that the effective charge of the polyelectrolyte chains is fixed. The ability of the polyelectrolyte chains to self-regulate their effective charge due to the self-consistent coupling between ionization equilibrium and polymer conformations, depending on the dielectric constant, temperature, and polymer concentration, affects the critical phenomena and phase transitions drastically. By considering salt-free polyelectrolyte solutions, we show that the daughter phases have different polymer charges from that of the mother phase. The critical point is also altered significantly by the charge self-regularization of the polymer chains. This work extends the progress made so far in the theory of phase separation of strong polyelectrolyte solutions to a higher level of understanding by considering chains which can self-regulate their charge.
Multiloop integrals in dimensional regularization made simple.
Henn, Johannes M
2013-06-21
Scattering amplitudes at loop level can be expressed in terms of Feynman integrals. The latter satisfy partial differential equations in the kinematical variables. We argue that a good choice of basis for (multi)loop integrals can lead to significant simplifications of the differential equations, and propose criteria for finding an optimal basis. This builds on experience obtained in supersymmetric field theories that can be applied successfully to generic quantum field theory integrals. It involves studying leading singularities and explicit integral representations. When the differential equations are cast into canonical form, their solution becomes elementary. The class of functions involved is easily identified, and the solution can be written down to any desired order in ϵ within dimensional regularization. Results obtained in this way are particularly simple and compact. In this Letter, we outline the general ideas of the method and apply them to a two-loop example.
Regularization of Nutation Time Series at GSFC
NASA Astrophysics Data System (ADS)
Le Bail, K.; Gipson, J. M.; Bolotin, S.
2012-12-01
VLBI is unique in its ability to measure all five Earth orientation parameters. In this paper we focus on the two nutation parameters, which characterize the orientation of the Earth's rotation axis in space. We look at the periodicities and the spectral characteristics of these parameters for both R1 and R4 sessions independently. The most significant periodic signal with a period shorter than 600 days is common to all four time series (a period of 450 days), and the type of noise determined by the Allan variance is white noise for all four series. To investigate methods of regularizing the series, we look at a Singular Spectrum Analysis-derived method and at the Kalman filter. The two methods adequately reproduce the tendency of the nutation time series, but the resulting series are noisier using the Singular Spectrum Analysis-derived method.
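The Allan-variance noise classification mentioned above can be illustrated on a synthetic series: for white noise the Allan variance falls off as 1/τ, i.e. with slope −1 on a log-log plot. A minimal sketch of the non-overlapped estimator (the VLBI series themselves are not reproduced here):

```python
import numpy as np

def allan_variance(y, taus):
    """Non-overlapped Allan variance of a series y with unit sampling:
    AVAR(m) = 0.5 * mean((ybar_{k+1} - ybar_k)^2), where ybar are
    consecutive non-overlapping block averages of length m."""
    out = []
    for m in taus:
        nblk = len(y) // m
        yb = y[:nblk * m].reshape(nblk, m).mean(axis=1)  # block averages
        d = np.diff(yb)
        out.append(0.5 * np.mean(d * d))
    return np.array(out)
```

Fitting the log-log slope of AVAR against τ then identifies the noise type: −1 for white noise, 0 for flicker noise, +1 for random walk.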
Thermodynamics of regular accelerating black holes
NASA Astrophysics Data System (ADS)
Astorino, Marco
2017-03-01
Using the covariant phase space formalism, we compute the conserved charges for a solution describing an accelerating and electrically charged Reissner-Nordström black hole. The metric is regular provided that the acceleration is driven by an external electric field, in place of the string of the standard C-metric. The Smarr formula and the first law of black hole thermodynamics are fulfilled. The resulting mass has the same form as the Christodoulou-Ruffini irreducible mass. On the basis of these results, we can extrapolate the mass and thermodynamics of the rotating C-metric, which describes a Kerr-Newman-(A)dS black hole accelerated by a pulling string.
Conformal regularization of Einstein's field equations
NASA Astrophysics Data System (ADS)
Röhr, Niklas; Uggla, Claes
2005-09-01
To study asymptotic structures, we regularize Einstein's field equations by means of conformal transformations. The conformal factor is chosen so that it carries a dimensional scale that captures crucial asymptotic features. By choosing a conformal orthonormal frame, we obtain a coupled system of differential equations for a set of dimensionless variables, associated with the conformal dimensionless metric, where the variables describe ratios with respect to the chosen asymptotic scale structure. As examples, we describe some explicit choices of conformal factors and coordinates appropriate for the situation of a timelike congruence approaching a singularity. One choice is shown to just slightly modify the so-called Hubble-normalized approach, and one leads to dimensionless first-order symmetric hyperbolic equations. We also discuss differences and similarities with other conformal approaches in the literature, as regards, e.g., isotropic singularities.
Regularity of inviscid shell models of turbulence
NASA Astrophysics Data System (ADS)
Constantin, Peter; Levant, Boris; Titi, Edriss S.
2007-01-01
In this paper we continue the analytical study of the sabra shell model of energy turbulent cascade. We prove the global existence of weak solutions of the inviscid sabra shell model, and show that these solutions are unique for some short interval of time. In addition, we prove that the solutions conserve energy, provided that the components of the solution satisfy ∣un∣≤Ckn-1/3[nlog(n+1)]-1 for some positive absolute constant C, which is the analog of Onsager's conjecture for the Euler equations. Moreover, we give a Beale-Kato-Majda type criterion for the blow-up of solutions of the inviscid sabra shell model and show the global regularity of the solutions in the “two-dimensional” parameters regime.
Regularity of free boundaries a heuristic retro
Caffarelli, Luis A.; Shahgholian, Henrik
2015-01-01
This survey concerns regularity theory of a few free boundary problems that have been developed over the past half century. Our intention is to bring up different ideas and techniques that constitute the fundamentals of the theory. We shall discuss four different problems, where approaches are somewhat different in each case. Nevertheless, these problems can be divided into two groups: (i) obstacle and thin obstacle problem; (ii) minimal surfaces, and cavitation flow of a perfect fluid. In each case, we shall only discuss the methodology and approaches, giving basic ideas and tools that have been specifically designed and tailored for that particular problem. The survey is kept at a heuristic level with mainly geometric interpretation of the techniques and situations at hand. PMID:26261372
ERIC Educational Resources Information Center
Brown, Joyceanne; And Others
1991-01-01
This survey of 201 regular education teachers found that the most frequently used prereferral strategies to facilitate classroom adjustment and achievement were consultation with other professionals, parent conferences, and behavior management techniques. Elementary teachers implemented more strategies than secondary-level teachers.…
Black hole mimickers: Regular versus singular behavior
Lemos, Jose P. S.; Zaslavskii, Oleg B.
2008-07-15
Black hole mimickers are possible alternatives to black holes; they would look observationally almost like black holes but would have no horizon. The properties in the near-horizon region where gravity is strong can be quite different for both types of objects, but at infinity it could be difficult to discern black holes from their mimickers. To disentangle this possible confusion, we examine the near-horizon properties, and their connection with far away asymptotic properties, of some candidates for black hole mimickers. We study spherically symmetric uncharged or charged but nonextremal objects, as well as spherically symmetric charged extremal objects. Within the uncharged or charged but nonextremal black hole mimickers, we study nonextremal ε-wormholes on the threshold of the formation of an event horizon, of which a subclass are called black foils, and gravastars. Within the charged extremal black hole mimickers we study extremal ε-wormholes on the threshold of the formation of an event horizon, quasi-black holes, and wormholes on the basis of quasi-black holes from Bonnor stars. We elucidate whether or not the objects belonging to these two classes remain regular in the near-horizon limit. The requirement of full regularity, i.e., finite curvature and absence of naked behavior, up to an arbitrary neighborhood of the gravitational radius of the object enables one to rule out potential mimickers in most of the cases. A list ranking black hole mimickers from best to worst, both nonextremal and extremal, is as follows: wormholes on the basis of extremal black holes or on the basis of quasi-black holes, quasi-black holes, wormholes on the basis of nonextremal black holes (black foils), and gravastars. Since in observational astrophysics it is difficult to find extremal configurations (the best mimickers in the ranking), whereas nonextremal configurations are really bad mimickers, the task of distinguishing black holes from their mimickers seems to
Regularization for Atmospheric Temperature Retrieval Problems
NASA Technical Reports Server (NTRS)
Velez-Reyes, Miguel; Galarza-Galarza, Ruben
1997-01-01
Passive remote sensing of the atmosphere is used to determine the atmospheric state. A radiometer measures microwave emissions from earth's atmosphere and surface. The radiance measured by the radiometer is proportional to the brightness temperature. This brightness temperature can be used to estimate atmospheric parameters such as temperature and water vapor content. These quantities are of primary importance for different applications in meteorology, oceanography, and geophysical sciences. Depending on the range in the electromagnetic spectrum being measured by the radiometer and the atmospheric quantities to be estimated, the retrieval or inverse problem of determining atmospheric parameters from brightness temperature might be linear or nonlinear. In most applications, the retrieval problem requires the inversion of a Fredholm integral equation of the first kind making this an ill-posed problem. The numerical solution of the retrieval problem requires the transformation of the continuous problem into a discrete problem. The ill-posedness of the continuous problem translates into ill-conditioning or ill-posedness of the discrete problem. Regularization methods are used to convert the ill-posed problem into a well-posed one. In this paper, we present some results of our work in applying different regularization techniques to atmospheric temperature retrievals using brightness temperatures measured with the SSM/T-1 sensor. Simulation results are presented which show the potential of these techniques to improve temperature retrievals. In particular, no statistical assumptions are needed and the algorithms were capable of correctly estimating the temperature profile corner at the tropopause independent of the initial guess.
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
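A minimal coordinate-descent sketch of plain elastic-net regression, the baseline penalty the ENLR framework builds on (this is not the ENLR algorithm itself; all parameter values here are illustrative):

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def elastic_net(X, y, lam, alpha, n_iter=200):
    """Elastic-net regression by cyclic coordinate descent.

    Minimizes (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1
                                        + (1-alpha)/2*||b||_2^2).
    Assumes columns of X are standardized to mean 0, variance 1.
    """
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                          # current residual y - X b
    for _ in range(n_iter):
        for j in range(p):
            rho = (X[:, j] @ r) / n + b[j]   # partial-residual correlation
            bj = soft(rho, lam * alpha) / (1.0 + lam * (1.0 - alpha))
            r += X[:, j] * (b[j] - bj)       # keep residual in sync
            b[j] = bj
    return b
```

The l1 part (weight alpha) produces exact zeros, giving compactness, while the l2 part stabilizes correlated features, the same compactness/robustness trade-off the elastic-net regularization of singular values exploits above.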
Regularization of Instantaneous Frequency Attribute Computations
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency based on the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or the Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
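The second attribute above, a time-variant centroid frequency with a regularized division, can be sketched with a plain windowed FFT standing in for the Gabor/Stockwell transforms; the chirp signal and the small `eps` stabilizer in the denominator are our illustrative choices, not the paper's roughness-penalized scheme.

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# Linear chirp sweeping 50 -> 200 Hz; instantaneous frequency is 50 + 150*t.
sig = np.sin(2 * np.pi * (50.0 * t + 75.0 * t**2))

win, hop = 128, 32
freqs = np.fft.rfftfreq(win, 1.0 / fs)
times, centroid = [], []
for start in range(0, len(sig) - win, hop):
    frame = sig[start:start + win] * np.hanning(win)
    power = np.abs(np.fft.rfft(frame)) ** 2
    # Regularized division: eps keeps near-silent frames from blowing up.
    eps = 1e-8
    centroid.append((freqs * power).sum() / (power.sum() + eps))
    times.append((start + win / 2) / fs)
```

For this clean chirp the centroid tracks the analytic instantaneous frequency; the paper's point is that noisy field data make the regularization choice (roughness penalty, L-curve, GCV) decisive.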
The connection between regularization operators and support vector kernels.
Smola, Alex J.; Schölkopf, Bernhard; Müller, Klaus Robert
1998-06-01
In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels from the viewpoint of regularization theory and of the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodic domains. As a by-product we show that a large number of radial basis functions, namely the conditionally positive definite functions, may be used as support vector kernels.
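A minimal numerical illustration of the practical consequence of this correspondence: a valid support vector kernel must yield a positive semidefinite Gram matrix on any input set. The Gaussian RBF below is one of the translation-invariant kernels covered by the theory (the inputs and `gamma` are arbitrary choices of ours):

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))    # arbitrary input points
K = rbf_gram(X)
eigvals = np.linalg.eigvalsh(K)     # all >= 0 for a valid SV kernel
```

In the paper's terms, the nonnegative Fourier transform of the Gaussian is what guarantees this positive semidefiniteness and links the kernel to a smoothness-enforcing regularization operator.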
Error analysis for matrix elastic-net regularization algorithms.
Li, Hong; Chen, Na; Li, Luoqing
2012-05-01
Elastic-net regularization is a successful approach in statistical modeling. It can avoid large variations which occur in estimating complex models. In this paper, elastic-net regularization is extended to a more general setting, the matrix recovery (matrix completion) setting. Based on a combination of the nuclear-norm minimization and the Frobenius-norm minimization, we consider the matrix elastic-net (MEN) regularization algorithm, which is an analog to the elastic-net regularization scheme from compressive sensing. Some properties of the estimator are characterized by the singular value shrinkage operator. We estimate the error bounds of the MEN regularization algorithm in the framework of statistical learning theory. We compute the learning rate by estimates of the Hilbert-Schmidt operators. In addition, an adaptive scheme for selecting the regularization parameter is presented. Numerical experiments demonstrate the superiority of the MEN regularization algorithm.
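The singular value shrinkage operator that characterizes the MEN estimator can be sketched directly: it is the proximal map of the combined nuclear-norm and Frobenius-norm penalty. The function and parameter names below are ours, and the low-rank test matrix is synthetic:

```python
import numpy as np

def men_shrink(Y, lam_nuc=1.0, lam_fro=0.1):
    """Proximal map of lam_nuc*||X||_* + (lam_fro/2)*||X||_F^2 at Y:
    soft-threshold the singular values, then rescale."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam_nuc, 0.0) / (1.0 + lam_fro)
    return (U * s_shrunk) @ Vt

# Low-rank matrix plus noise: shrinkage zeroes the small noise
# singular values while keeping the three signal directions.
rng = np.random.default_rng(2)
L = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))  # rank 3
Y = L + 0.05 * rng.standard_normal((40, 30))
X_hat = men_shrink(Y, lam_nuc=1.0, lam_fro=0.0)
```

The nuclear-norm threshold `lam_nuc` promotes low rank, while `lam_fro` shrinks the retained singular values uniformly, the matrix analog of the vector elastic net's L1/L2 split.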
Preparation of Regular Specimens for Atom Probes
NASA Technical Reports Server (NTRS)
Kuhlman, Kim; Wishard, James
2003-01-01
A method of preparation of specimens of non-electropolishable materials for analysis by atom probes is being developed as a superior alternative to a prior method. In comparison with the prior method, the present method involves less processing time. Also, whereas the prior method yields irregularly shaped and sized specimens, the present developmental method offers the potential to prepare specimens of regular shape and size. The prior method is called the method of sharp shards because it involves crushing the material of interest and selecting microscopic sharp shards of the material for use as specimens. Each selected shard is oriented with its sharp tip facing away from the tip of a stainless-steel pin and is glued to the tip of the pin by use of silver epoxy. Then the shard is milled by use of a focused ion beam (FIB) to make the shard very thin (relative to its length) and to make its tip sharp enough for atom-probe analysis. The method of sharp shards is extremely time-consuming because the selection of shards must be performed with the help of a microscope, the shards must be positioned on the pins by use of micromanipulators, and the irregularity of size and shape necessitates many hours of FIB milling to sharpen each shard. In the present method, a flat slab of the material of interest (e.g., a polished sample of rock or a coated semiconductor wafer) is mounted in the sample holder of a dicing saw of the type conventionally used to cut individual integrated circuits out of the wafers on which they are fabricated in batches. A saw blade appropriate to the material of interest is selected. The depth of cut and the distance between successive parallel cuts are chosen such that what is left after the cuts is a series of thin, parallel ridges on a solid base. Then the workpiece is rotated 90° and the pattern of cuts is repeated, leaving behind a square array of square posts on the solid base. The posts can be made regular, long, and thin, as required for samples
Compression and regularization with the information bottleneck
NASA Astrophysics Data System (ADS)
Strouse, Dj; Schwab, David
Compression fundamentally involves a decision about what is relevant and what is not. The information bottleneck (IB) by Tishby, Pereira, and Bialek formalized this notion as an information-theoretic optimization problem and proposed an optimal tradeoff between throwing away as many bits as possible, and selectively keeping those that are most important. The IB has also recently been proposed as a theory of sensory gating and predictive computation in the retina by Palmer et al. Here, we introduce an alternative formulation of the IB, the deterministic information bottleneck (DIB), that we argue better captures the notion of compression, including that done by the brain. As suggested by its name, the solution to the DIB problem is a deterministic encoder, as opposed to the stochastic encoder that is optimal under the IB. We then compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB vastly outperforms the IB in terms of the DIB cost function. Our derivation of the DIB also provides a family of models which interpolates between the DIB and IB by adding noise of a particular form. We discuss the role of this noise as a regularizer.
Determinants of Scanpath Regularity in Reading.
von der Malsburg, Titus; Kliegl, Reinhold; Vasishth, Shravan
2015-09-01
Scanpaths have played an important role in classic research on reading behavior. Nevertheless, they have largely been neglected in later research perhaps due to a lack of suitable analytical tools. Recently, von der Malsburg and Vasishth (2011) proposed a new measure for quantifying differences between scanpaths and demonstrated that this measure can recover effects that were missed with the traditional eyetracking measures. However, the sentences used in that study were difficult to process and scanpath effects accordingly strong. The purpose of the present study was to test the validity, sensitivity, and scope of applicability of the scanpath measure, using simple sentences that are typically read from left to right. We derived predictions for the regularity of scanpaths from the literature on oculomotor control, sentence processing, and cognitive aging and tested these predictions using the scanpath measure and a large database of eye movements. All predictions were confirmed: Sentences with short words and syntactically more difficult sentences elicited more irregular scanpaths. Also, older readers produced more irregular scanpaths than younger readers. In addition, we found an effect that was not reported earlier: Syntax had a smaller influence on the eye movements of older readers than on those of young readers. We discuss this interaction of syntactic parsing cost with age in terms of shifts in processing strategies and a decline of executive control as readers age. Overall, our results demonstrate the validity and sensitivity of the scanpath measure and thus establish it as a productive and versatile tool for reading research.
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database, and the Corel image database have been carried out to show how MRED outperforms existing methods.
Information theoretic regularization in diffuse optical tomography.
Panagiotou, Christos; Somayajula, Sangeetha; Gibson, Adam P; Schweiger, Martin; Leahy, Richard M; Arridge, Simon R
2009-05-01
Diffuse optical tomography (DOT) retrieves the spatially distributed optical characteristics of a medium from external measurements. Recovering the parameters of interest involves solving a nonlinear and highly ill-posed inverse problem. This paper examines the possibility of regularizing DOT via the introduction of a priori information from alternative high-resolution anatomical modalities, using the information theory concepts of mutual information (MI) and joint entropy (JE). Such functionals evaluate the similarity between the reconstructed optical image and the prior image while bypassing the multimodality barrier manifested as the incommensurate relation between the gray value representations of corresponding anatomical features in the two modalities. By introducing structural information, we aim to improve the spatial resolution and quantitative accuracy of the solution. We provide a thorough explanation of the theory from an imaging perspective, accompanied by preliminary results using numerical simulations. In addition we compare the performance of MI and JE. Finally, we have adopted a method for fast marginal entropy evaluation and optimization by modifying the objective function and extending it to the JE case. We demonstrate its use on an image reconstruction framework and show significant computational savings.
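The similarity functionals at the heart of this approach are easy to sketch. Below is a histogram (plug-in) estimate of mutual information between two images, a generic stand-in for the prior term used in the paper; the synthetic images and bin count are our choices:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of MI between two images, in nats."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
anat = rng.random((64, 64))                             # "anatomical prior"
related = anat + 0.1 * rng.standard_normal((64, 64))    # shares structure
unrelated = rng.random((64, 64))
mi_rel = mutual_information(anat, related)
mi_unrel = mutual_information(anat, unrelated)
```

MI rewards any consistent statistical dependence between gray values, which is what lets it bridge modalities whose intensity scales are incommensurate, the "multimodality barrier" the abstract refers to.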
Flip to Regular Triangulation and Convex Hull.
Gao, Mingcen; Cao, Thanh-Tung; Tan, Tiow-Seng
2017-02-01
Flip is a simple and local operation that transforms one triangulation into another. It makes changes only to some neighboring simplices, without considering any attribute or configuration that is global in nature to the triangulation. Thanks to this characteristic, several flips can be applied independently to different small, non-overlapping regions of one triangulation. Such an operation is favored when designing algorithms for data-parallel, massively multithreaded hardware such as the GPU. However, most existing flip algorithms are designed to be executed sequentially, and usually need some restrictions on the execution order of flips, making them hard to adapt to parallel computation. In this paper, we present an in-depth study of flip algorithms in low dimensions, with emphasis on the flexibility of their execution order. In particular, we propose a series of provably correct flip algorithms for regular triangulation and convex hull in 2D and 3D, with implementations for both CPUs and GPUs. Our experiments show that our GPU implementation for constructing these structures from a given point set achieves up to two orders of magnitude of speedup over popular single-threaded CPU implementations of existing algorithms.
Temporal Regularity of the Environment Drives Time Perception
2016-01-01
It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stimulus was regular or not. We found that participants exposed to an irregular environment frequently reported perfectly regularly paced stimuli to be irregular. In a second experiment, we asked participants to judge whether the final stimulus was presented before or after a flash. In this way, we were able to determine distortions in temporal perception as changes in the timing necessary for the sound and the flash to be perceived as synchronous. We found that within a regular context, the perceived timing of deviant last stimuli changed so that the relative anisochrony appeared to be perceptually decreased. In the irregular context, the perceived timing of irregular stimuli following a regular sequence was not affected. These observations suggest that humans use temporal expectations to evaluate the regularity of sequences and that expectations are combined with sensory stimuli to adapt perceived timing to follow the statistics of the environment. Expectations can be seen as a priori probabilities on which the perceived timing of stimuli depends. PMID:27441686
On reducibility of degenerate optimization problems to regular operator equations
NASA Astrophysics Data System (ADS)
Bednarczuk, E. M.; Tretyakov, A. A.
2016-12-01
We present an application of the p-regularity theory to the analysis of non-regular (irregular, degenerate) nonlinear optimization problems. The p-regularity theory, also known as the p-factor analysis of nonlinear mappings, has been developed over the last thirty years. The p-factor analysis is based on the construction of the p-factor operator, which allows us to analyze optimization problems in the degenerate case. We investigate reducibility of a non-regular optimization problem to a regular system of equations that does not depend on the objective function. As an illustration, we consider applications of our results to non-regular complementarity problems of mathematical programming and to linear programming problems.
Transient Lunar Phenomena: Regularity and Reality
NASA Astrophysics Data System (ADS)
Crotts, Arlin P. S.
2009-05-01
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ~50% of reports originate from near Aristarchus, ~16% from Plato, ~6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a "feature" as defined). TLP count consistency for these features indicates that ~80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.
Elementary Particle Spectroscopy in Regular Solid Rewrite
NASA Astrophysics Data System (ADS)
Trell, Erik
2008-10-01
The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)×O(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each
Analysis of regularized Navier-Stokes equations. I, II
NASA Technical Reports Server (NTRS)
Ou, Yuh-Roung; Sritharan, S. S.
1991-01-01
A regularized form of the conventional Navier-Stokes equations is analyzed. The global existence and uniqueness are established for two classes of generalized solutions. It is shown that the solution of this regularized system converges to the solution of the conventional Navier-Stokes equations for low Reynolds numbers. Particular attention is given to the structure of attractors characterizing the solutions. Both local and global invariant manifolds are found, and the regularity properties of these manifolds are analyzed.
Regularization of multiplicative iterative algorithms with nonnegative constraint
NASA Astrophysics Data System (ADS)
Benvenuto, Federico; Piana, Michele
2014-03-01
This paper studies the regularization of the constrained maximum likelihood iterative algorithms applied to incompatible ill-posed linear inverse problems. Specifically, we introduce a novel stopping rule which defines a regularization algorithm for the iterative space reconstruction algorithm in the case of least-squares minimization. Further we show that the same rule regularizes the expectation maximization algorithm in the case of Kullback-Leibler minimization, provided a well-justified modification of the definition of Tikhonov regularization is introduced. The performances of this stopping rule are illustrated in the case of an image reconstruction problem in the x-ray solar astronomy.
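The paper's specific stopping rule is not reproduced here, but the mechanism it exploits, early stopping of an iterative solver acting as regularization, can be sketched with the classical discrepancy principle applied to Landweber iteration for least squares; the synthetic operator and noise level are of our choosing:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
# Ill-conditioned synthetic forward operator with known singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -3, n)
A = U @ np.diag(s) @ V.T
x_true = V[:, :5].sum(axis=1)           # lives in well-conditioned modes
sigma = 1e-3
b = A @ x_true + sigma * rng.standard_normal(n)

# Landweber iteration x <- x + tau * A^T (b - A x).  Iterating forever
# overfits the noise (semi-convergence); stopping once the residual
# falls to the noise level (discrepancy principle) regularizes.
tau = 1.0 / s.max() ** 2
delta = sigma * np.sqrt(n)              # expected noise norm
x = np.zeros(n)
for _ in range(10000):
    r = b - A @ x
    if np.linalg.norm(r) <= 1.1 * delta:
        break
    x = x + tau * A.T @ r
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The iteration count plays the role of the regularization parameter: the stopping index trades data fit against noise amplification, which is the trade-off the paper's rule controls for the ISRA and EM algorithms.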
Lagrangian averaging, nonlinear waves, and shock regularization
NASA Astrophysics Data System (ADS)
Bhat, Harish S.
In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-alpha) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories eta^epsilon centered around a given Lagrangian flow eta. With this tube framework, the LAE-alpha equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations xi, truncate, average, and then model those terms that are nonlinear functions of xi. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments showing that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here alpha is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-alpha limit, without any standard viscosity
The residual method for regularizing ill-posed problems
Grasmair, Markus; Haltmeier, Markus; Scherzer, Otmar
2011-01-01
Although the residual method, or constrained regularization, is frequently used in applications, a detailed study of its properties is still missing. This sharply contrasts the progress of the theory of Tikhonov regularization, where a series of new results for regularization in Banach spaces has been published in the recent years. The present paper intends to bridge the gap between the existing theories as far as possible. We develop a stability and convergence theory for the residual method in general topological spaces. In addition, we prove convergence rates in terms of (generalized) Bregman distances, which can also be applied to non-convex regularization functionals. We provide three examples that show the applicability of our theory. The first example is the regularized solution of linear operator equations on Lp-spaces, where we show that the results of Tikhonov regularization generalize unchanged to the residual method. As a second example, we consider the problem of density estimation from a finite number of sampling points, using the Wasserstein distance as a fidelity term and an entropy measure as regularization term. It is shown that the densities obtained in this way depend continuously on the location of the sampled points and that the underlying density can be recovered as the number of sampling points tends to infinity. Finally, we apply our theory to compressed sensing. Here, we show the well-posedness of the method and derive convergence rates both for convex and non-convex regularization under rather weak conditions. PMID:22345828
Fundamental and Regular Elementary Schools: Do Differences Exist?
ERIC Educational Resources Information Center
Weber, Larry J.; And Others
This study compared the academic achievement and other outcomes of three public fundamental elementary schools with three regular elementary schools in a metropolitan school district. Modeled after the John Marshall Fundamental School in Pasadena, California, which opened in the fall of 1973, fundamental schools differ from regular schools in that…
Analysis of regularized Navier-Stokes equations, 2
NASA Technical Reports Server (NTRS)
Ou, Yuh-Roung; Sritharan, S. S.
1989-01-01
A practically important regularization of the Navier-Stokes equations is analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions is studied. Local as well as global invariant manifolds are found, and the regularity properties of these manifolds are analyzed.
20 CFR 216.13 - Regular current connection test.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of...
Regular and Special Educators Inservice: A Model of Cooperative Effort.
ERIC Educational Resources Information Center
van Duyne, H. John; And Others
The Regular Education Inservice Program (REIT) at Bowling Green State University (Ohio) assists instructional resource centers (IRC's) and local educational agencies (LEA's) in developing and implementing inservice non-degree programs which respond to the mandates of Public Law 94-142. The target population is regular education personnel working…
Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears
ERIC Educational Resources Information Center
Chen, Sau-Chin; Hu, Jon-Fan
2015-01-01
Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…
39 CFR 3010.7 - Schedule of regular rate changes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Schedule of regular rate changes. 3010.7 Section... PRODUCTS General Provisions § 3010.7 Schedule of regular rate changes. (a) The Postal Service shall... estimated implementation dates for future Type 1-A rate changes for each separate class of mail, should...
Chimeric mitochondrial peptides from contiguous regular and swinger RNA.
Seligmann, Hervé
2016-01-01
Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying by 24 DNA's protein coding potential. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions, detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues for each regular and swinger parts, regular parts of eleven chimeric peptides correspond to six among the thirteen recognized, mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
47 CFR 76.614 - Cable television system regular monitoring.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Cable television system regular monitoring. 76... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands...
29 CFR 778.500 - Artificial regular rates.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 3 2012-07-01 2012-07-01 false Artificial regular rates. 778.500 Section 778.500 Labor... Circumvent the Act Devices to Evade the Overtime Requirements § 778.500 Artificial regular rates. (a) Since... of his compensation. Payment for overtime on the basis of an artificial “regular” rate will...
29 CFR 778.408 - The specified regular rate.
Code of Federal Regulations, 2014 CFR
2014-07-01
... employee's compensation. Suppose, for example, that the compensation of an employee is normally made up in...) with an employee whose regular weekly earnings are made up in part by the payment of regular bonuses... compensation, over and above the guaranteed amount, by way of extra premiums for work on holidays, or...
Regular expression order-sorted unification and matching
Kutsia, Temur; Marin, Mircea
2015-01-01
We extend order-sorted unification by permitting regular expression sorts for variables and in the domains of function symbols. The obtained signature corresponds to a finite bottom-up unranked tree automaton. We prove that regular expression order-sorted (REOS) unification is of type infinitary and decidable. The unification problem we present generalizes several known problems, such as order-sorted unification for ranked terms, sequence unification, and word unification with regular constraints. Decidability of REOS unification implies that sequence unification with regular hedge language constraints is decidable, generalizing the decidability result for word unification with regular constraints to terms. A sort weakening algorithm helps to construct a minimal complete set of REOS unifiers from the solutions of sequence unification problems. Moreover, we design a complete algorithm for REOS matching, and show that this problem is NP-complete and the corresponding counting problem is #P-complete. PMID:26523088
Nonconvex regularizations in fluorescence molecular tomography for sparsity enhancement
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Li, Changqing
2014-06-01
In vivo fluorescence imaging has been a popular functional imaging modality in preclinical imaging. Near-infrared probes used in fluorescence molecular tomography (FMT) are designed to localize in the targeted tissues; hence a sparse solution to the FMT image reconstruction problem is preferred. Nonconvex regularization methods are reported to enhance sparsity in fields such as statistical learning and compressed sensing. We investigated such regularization methods in FMT for small-animal imaging with numerical simulations and phantom experiments. We adopted a majorization-minimization algorithm for the iterative reconstruction process and compared the reconstructed images using our proposed nonconvex regularizations with those using the well-known L1 regularization. We found that the proposed nonconvex methods outperform L1 regularization in accurately recovering sparse targets in FMT.
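The L1 baseline that the comparison rests on can be sketched in a few lines of proximal-gradient (ISTA) code. This is only the baseline, not the paper's majorization-minimization solver or its nonconvex penalties, and the random sensing matrix below is a stand-in for the FMT forward model:

```python
import numpy as np

def ista_l1(A, y, lam, n_iter=2000):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))      # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
    return x

# synthetic sparse-recovery toy: random sensing matrix, 3-spike target
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, -2.0, 4.0]
x_hat = ista_l1(A, A @ x_true, lam=0.1)
```

On this toy problem the three spike locations are recovered; nonconvex penalties aim to reduce the amplitude bias that the soft-threshold introduces.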
Encoding of configural regularity in the human visual system.
Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P
2014-08-13
The visual system is very efficient in encoding stimulus properties by utilizing available regularities in the inputs. To explore the underlying encoding strategies during visual information processing, we presented participants with two-line configurations that varied in the amount of configural regularity (or degrees of freedom in the relative positioning of the two lines) in an fMRI experiment. Configural regularity ranged from a generic configuration to stimuli resembling an "L" (i.e., a right-angle L-junction), a "T" (i.e., a right-angle midpoint T-junction), or a "+", the latter being the most regular stimulus. We found that the response strength in the shape-selective lateral occipital area was consistently lower for a higher degree of regularity in the stimuli. In the second experiment, using multivoxel pattern analysis, we further show that regularity is encoded in terms of the fMRI signal strength but not in the distributed pattern of responses. Finally, we found that the results of these experiments could not be accounted for by low-level stimulus properties and are distinct from norm-based encoding. Our results suggest that regularity plays an important role in stimulus encoding in the ventral visual processing stream.
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic programming, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programmings. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
Learning rates of lq coefficient regularization learning with gaussian kernel.
Lin, Shaobo; Zeng, Jinshan; Fang, Jian; Xu, Zongben
2014-10-01
Regularization is a well-recognized, powerful strategy to improve the performance of a learning machine, and ℓq regularization schemes with 0 < q < ∞ are in widespread use. It is known that different q lead to different properties of the deduced estimators: ℓ2 regularization leads to a smooth estimator, while ℓ1 regularization leads to a sparse estimator. How the generalization capability of ℓq regularization learning varies with q is therefore worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing ℓq coefficient regularization schemes in the sample-dependent hypothesis space associated with a Gaussian kernel can attain the same almost optimal learning rates for all 0 < q < ∞. That is, the upper and lower bounds of learning rates for ℓq regularization learning are asymptotically identical for all 0 < q < ∞. Our finding tentatively reveals that in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other non-generalization criteria like smoothness, computational complexity or sparsity.
Regular Patterns in Cerebellar Purkinje Cell Simple Spike Trains
Shin, Soon-Lim; Hoebeek, Freek E.; Schonewille, Martijn; De Zeeuw, Chris I.; Aertsen, Ad; De Schutter, Erik
2007-01-01
Background Cerebellar Purkinje cells (PC) in vivo are commonly reported to generate irregular spike trains, documented by high coefficients of variation of interspike-intervals (ISI). In strong contrast, they fire very regularly in the in vitro slice preparation. We studied the nature of this difference in firing properties by focusing on short-term variability and its dependence on behavioral state. Methodology/Principal Findings Using an analysis based on CV2 values, we could isolate precise regular spiking patterns, lasting up to hundreds of milliseconds, in PC simple spike trains recorded in both anesthetized and awake rodents. Regular spike patterns, defined by low variability of successive ISIs, comprised over half of the spikes, showed a wide range of mean ISIs, and were affected by behavioral state and tactile stimulation. Interestingly, regular patterns often coincided in nearby Purkinje cells without precise synchronization of individual spikes. Regular patterns exclusively appeared during the up state of the PC membrane potential, while single ISIs occurred both during up and down states. Possible functional consequences of regular spike patterns were investigated by modeling the synaptic conductance in neurons of the deep cerebellar nuclei (DCN). Simulations showed that these regular patterns caused epochs of relatively constant synaptic conductance in DCN neurons. Conclusions/Significance Our findings indicate that the apparent irregularity in cerebellar PC simple spike trains in vivo is most likely caused by mixing of different regular spike patterns, separated by single long intervals, over time. We propose that PCs may signal information, at least in part, in regular spike patterns to downstream DCN neurons. PMID:17534435
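The CV2 measure underlying the pattern analysis is simple to state concretely. A minimal sketch (the paper's regularity threshold, up/down-state analysis, and DCN conductance model are not reproduced; the spike trains below are synthetic):

```python
import numpy as np

def cv2_series(spike_times):
    """CV2 for successive interspike intervals: 2*|I[n+1] - I[n]| / (I[n+1] + I[n]).
    Values near 0 mark locally regular firing; iid exponential ISIs give mean 1."""
    isi = np.diff(spike_times)
    return 2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1])

regular = np.cumsum(np.full(20, 10.0))                 # constant 10 ms ISIs
rng = np.random.default_rng(1)
poisson_like = np.cumsum(rng.exponential(10.0, 2000))  # exponential ISIs

print(cv2_series(regular).mean())       # 0.0: perfectly regular train
print(cv2_series(poisson_like).mean())  # close to 1 for Poisson-like firing
```

Runs of consecutive low CV2 values in a recorded train are then candidates for the regular patterns described above.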
Blind image deblurring with edge enhancing total variation regularization
NASA Astrophysics Data System (ADS)
Shi, Yu; Hong, Hanyu; Song, Jie; Hua, Xia
2015-04-01
Blind image deblurring is an important issue. In this paper, we focus on solving this issue by constrained regularization method. Motivated by the importance of edges to visual perception, the edge-enhancing indicator is introduced to constrain the total variation regularization, and the bilateral filter is used for edge-preserving smoothing. The proposed edge enhancing regularization method aims to smooth preferably within each region and preserve edges. Experiments on simulated and real motion blurred images show that the proposed method is competitive with recent state-of-the-art total variation methods.
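For reference, the unconstrained total-variation core that edge-enhancing variants build on can be sketched as plain gradient descent on a smoothed isotropic TV functional. This is a denoising sketch only; the edge-enhancing indicator, bilateral filtering, and blur-kernel estimation of the method above are not modeled, and the smoothing constant eps and step size are illustrative choices:

```python
import numpy as np

def tv_denoise(img, lam=0.15, step=0.05, n_iter=300, eps=1e-2):
    """Gradient descent on 0.5*||u - img||^2 + lam * TV_eps(u), where TV_eps
    is the smoothed isotropic total variation sum of sqrt(ux^2 + uy^2 + eps)."""
    u = img.copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of (px, py) via backward differences (periodic wrap at border)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * ((u - img) - lam * div)
    return u

# piecewise-constant test image with additive Gaussian noise
rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

TV removes the noise while keeping the vertical edge sharp, which is exactly the behavior the edge-enhancing constraint tries to strengthen.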
Universality in the flooding of regular islands by chaotic states.
Bäcker, Arnd; Ketzmerick, Roland; Monastra, Alejandro G
2007-06-01
We investigate the structure of eigenstates in systems with a mixed phase space in terms of their projection onto individual regular tori. Depending on dynamical tunneling rates and the Heisenberg time, regular states disappear and chaotic states flood the regular tori. For a quantitative understanding we introduce a random matrix model. The resulting statistical properties of eigenstates as a function of an effective coupling strength are in very good agreement with numerical results for a kicked system. We discuss the implications of these results for the applicability of the semiclassical eigenfunction hypothesis.
Exploring the spectrum of regularized bosonic string theory
Ambjørn, J.; Makeenko, Y.
2015-03-15
We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.
Low-Rank Matrix Factorization With Adaptive Graph Regularizer.
Lu, Gui-Fu; Wang, Yong; Zou, Jian
2016-05-01
In this paper, we present a novel low-rank matrix factorization algorithm with an adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek the graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. Experimental results on several data sets demonstrate that the proposed algorithm outperforms state-of-the-art low-rank matrix factorization methods.
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Some results on the spectra of strongly regular graphs
NASA Astrophysics Data System (ADS)
Vieira, Luís António de Almeida; Mano, Vasco Moço
2016-06-01
Let G be a strongly regular graph whose adjacency matrix is A. We associate to G a real finite-dimensional Euclidean Jordan algebra 𝒱 of rank three, spanned by I and the natural powers of A, endowed with the Jordan product of matrices and with the inner product being the usual trace of matrices. Finally, by analyzing the binomial Hadamard series of an element of 𝒱, we establish some inequalities on the parameters and on the spectrum of a strongly regular graph like those established in theorems 3 and 4.
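A concrete instance of the spectral side: for the Petersen graph, srg(10, 3, 0, 1), the spectrum predicted by the standard parameter relations can be checked numerically (this verifies only the eigenvalues; the Jordan-algebra construction itself is not reproduced):

```python
import numpy as np
from itertools import combinations
from collections import Counter

# Petersen graph as the Kneser graph K(5,2): vertices are 2-subsets of {0..4},
# with an edge between disjoint pairs. It is strongly regular with parameters
# srg(10, 3, 0, 1), so besides k = 3 its eigenvalues are the roots of
# x^2 - (lambda - mu)x - (k - mu) = x^2 + x - 2, i.e. 1 and -2.
verts = list(combinations(range(5), 2))
n = len(verts)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if not set(verts[i]) & set(verts[j]):
            A[i, j] = 1.0

spectrum = Counter(np.round(np.linalg.eigvalsh(A)).astype(int).tolist())
print(spectrum)  # eigenvalue 3 once, 1 five times, -2 four times
```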
Quaternion regularization and stabilization of perturbed central motion. II
NASA Astrophysics Data System (ADS)
Chelnokov, Yu. N.
1993-04-01
Generalized regular quaternion equations for the three-dimensional two-body problem in terms of Kustaanheimo-Stiefel variables are obtained within the framework of the quaternion theory of regularizing and stabilizing transformations of the Newtonian equations for perturbed central motion. Regular quaternion equations for perturbed central motion of a material point in a central field with a certain potential Pi are also derived in oscillatory and normal forms. In addition, systems of perturbed central motion equations are obtained which include quaternion equations of perturbed orbit orientations in oscillatory or normal form, and a generalized Binet equation is derived. A comparative analysis of the equations is carried out.
A novel regularized edge-preserving super-resolution algorithm
NASA Astrophysics Data System (ADS)
Yu, Hui; Chen, Fu-sheng; Zhang, Zhi-jie; Wang, Chen-sheng
2013-09-01
Using super-resolution (SR) technology is a good approach to obtaining high-resolution infrared images. However, image super-resolution reconstruction is essentially an ill-posed problem, so it is important to design an effective regularization term (image prior). A Gaussian prior is widely used as the regularization term, but the reconstructed SR image then becomes over-smoothed. Here, a novel regularization term called the non-local means (NLM) term is derived, based on the assumption that natural image content is likely to repeat itself within some neighborhood. In the proposed framework, the estimated high-resolution image is obtained by minimizing a cost function. An iterative method is applied to solve this optimization problem, and as the iteration progresses, the regularization term is adaptively updated. The proposed algorithm has been tested in several experiments. The experimental results show that the proposed approach is robust and can reconstruct higher-quality images both in quantitative terms and in perceptual effect.
Thermodynamical Stability of a New Regular Black Hole
NASA Astrophysics Data System (ADS)
Saadat, Hassan
2013-09-01
In this paper we consider a new regular black hole and calculate thermodynamical variables such as entropy, specific heat and free energy. Then we study thermodynamical stability of this black hole by using the specific heat in constant volume.
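For orientation, the quantities involved can be illustrated on the standard Schwarzschild benchmark, not on the regular metric of the paper: in geometric units, T = 1/(8πM), S = 4πM², and the specific heat C = T dS/dT = −8πM² is negative, i.e. the Schwarzschild case is thermodynamically unstable, which is exactly the kind of conclusion the sign of C decides for a regular black hole as well:

```python
import math

def temperature(M):
    """Hawking temperature of the Schwarzschild benchmark, T = 1/(8*pi*M)."""
    return 1.0 / (8.0 * math.pi * M)

def entropy(M):
    """Bekenstein-Hawking entropy S = A/4 = 4*pi*M^2 (geometric units)."""
    return 4.0 * math.pi * M ** 2

def specific_heat(M, h=1e-6):
    """C = T * dS/dT via central differences; analytically C = -8*pi*M^2 < 0."""
    dS = entropy(M + h) - entropy(M - h)
    dT = temperature(M + h) - temperature(M - h)
    return temperature(M) * dS / dT

print(specific_heat(1.0))  # about -8*pi: negative, hence thermodynamically unstable
```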
Spelling-stress regularity effects are intact in developmental dyslexia.
Mundy, Ian R; Carroll, Julia M
2013-01-01
The current experiment investigated conflicting predictions regarding the effects of spelling-stress regularity on the lexical decision performance of skilled adult readers and adults with developmental dyslexia. In both reading groups, lexical decision responses were significantly faster and significantly more accurate when the orthographic structure of a word ending was a reliable as opposed to an unreliable predictor of lexical stress assignment. Furthermore, the magnitude of this spelling-stress regularity effect was found to be equivalent across reading groups. These findings are consistent with intact phoneme-level regularity effects also observed in dyslexia. The paper discusses how findings of intact spelling-sound regularity effects at both prosodic and phonemic levels, as well as other similar results, can be reconciled with the obvious difficulties that people with dyslexia experience in other domains of phonological processing.
39 CFR 3010.7 - Schedule of regular rate changes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... shall display the Schedule for Regular and Predictable Rate Changes on the Commission Web site, http... of mailers of each class of mail in developing the schedule. (e) Whenever the Postal Service deems...
Regularized Chapman-Enskog expansion for scalar conservation laws
NASA Technical Reports Server (NTRS)
Schochet, Steven; Tadmor, Eitan
1990-01-01
Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of Rosenau's regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity..., and at the same time, it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean free path epsilon approaches 0, and the convergence rate is estimated.
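The "usual viscosity approximation" that RCE is compared against is easy to exhibit numerically. A minimal finite-difference sketch of u_t + (u²/2)_x = ε u_xx for Burgers' equation shows the monotone viscous shock layer; the RCE operator itself is not implemented here, and grid parameters are illustrative:

```python
import numpy as np

# upwind convection + explicit diffusion for u_t + (u^2/2)_x = eps * u_xx,
# Riemann data u = 1 (left), u = 0 (right): a viscous shock of speed 1/2
n, dx, dt, eps = 200, 0.05, 0.005, 0.1
x = np.arange(n) * dx
u = np.where(x < 5.0, 1.0, 0.0)
for _ in range(200):                               # integrate to t = 1
    f = 0.5 * u ** 2
    conv = (f[1:-1] - f[:-2]) / dx                 # upwind, since u >= 0 everywhere
    diff = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    u[1:-1] += dt * (eps * diff - conv)
# u is now a smooth monotone shock layer connecting 1 to 0
```

Monotonicity of the profile is the property highlighted above; RCE's selling point is that it keeps such properties while producing a sharper layer.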
Mini-Stroke vs. Regular Stroke: What's the Difference?
... How is a ministroke different from a regular stroke? Answers from Jerry W. Swanson, M.D. When ... brain, spinal cord or retina, which may cause stroke-like symptoms but does not damage brain cells ...
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
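The alignment-detection idea can be caricatured in one dimension: coordinates that nearly coincide are grouped and snapped to a shared value. This greedy sketch is a stand-in only; the method above detects alignment, size, and distance constraints and enforces them through a quadratic program, and the tolerance below is an assumed parameter:

```python
def snap_alignments(xs, tol=5.0):
    """Greedy 1D clustering: coordinates closer than tol (scanned in sorted
    order) form a group and are snapped to the group mean."""
    if not xs:
        return []
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    groups, cur = [], [order[0]]
    for i in order[1:]:
        if xs[i] - xs[cur[-1]] <= tol:
            cur.append(i)
        else:
            groups.append(cur)
            cur = [i]
    groups.append(cur)
    out = list(xs)
    for g in groups:
        m = sum(xs[i] for i in g) / len(g)
        for i in g:
            out[i] = m
    return out

# left edges of five layout elements; the first two and middle two nearly align
print(snap_alignments([10.0, 12.0, 50.0, 51.0, 90.0]))  # [11.0, 11.0, 50.5, 50.5, 90.0]
```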
On almost regularity and π-normality of topological spaces
NASA Astrophysics Data System (ADS)
Saad Thabit, Sadeq Ali; Kamarulhaili, Hailiza
2012-05-01
π-Normality is a weaker version of normality. It was introduced by Kalantan in 2008. π-Normality lies between normality and almost normality (resp. quasi-normality). The importance of this topological property is that it behaves slightly differently from normality and almost normality (quasi-normality). π-Normality is neither a productive nor a hereditary property in general. In this paper, some properties of almost regular spaces are presented. In particular, a few results on almost regular spaces are improved. Some relationships between almost regularity and π-normality are presented. π-Generalized closed sets are used to obtain a characterization and preservation theorems for π-normal spaces. Also, by giving two counterexamples, we show that an almost regular Lindelöf space (resp. an almost regular space with a σ-locally finite base) is not necessarily π-normal. Almost normality of the rational sequence topology is proved.
Are Pupils in Special Education Too "Special" for Regular Education?
NASA Astrophysics Data System (ADS)
Pijl, Ysbrand J.; Pijl, Sip J.
1998-01-01
In the Netherlands special needs pupils are often referred to separate schools for the Educable Mentally Retarded (EMR) or the Learning Disabled (LD). There is an ongoing debate on how to reduce the growing numbers of special education placements. One of the main issues in this debate concerns the size of the difference in cognitive abilities between pupils in regular education and those eligible for LD or EMR education. In this study meta-analysis techniques were used to synthesize the findings from 31 studies on differences between pupils in regular primary education and those in special education in the Netherlands. Studies were grouped into three categories according to the type of measurements used: achievement, general intelligence and neuropsychological tests. It was found that pupils in regular education and those in special education differ in achievement and general intelligence. Pupils in schools for the educable mentally retarded in particular perform at a much lower level than is common in regular Dutch primary education.
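The meta-analysis machinery referred to above reduces to two formulas: a standardized mean difference per study and an inverse-variance weighted pooled mean. A minimal sketch with hypothetical numbers (not taken from the 31 studies):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def fixed_effect_mean(effects, variances):
    """Fixed-effect meta-analytic mean: inverse-variance weighted average."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# hypothetical study: regular-education group IQ 100 (sd 15, n 50) vs
# special-education group IQ 85 (sd 15, n 50) -> d = 1.0
d = cohens_d(100, 15, 50, 85, 15, 50)
pooled = fixed_effect_mean([1.0, 0.5], [0.1, 0.1])  # two hypothetical studies
```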
Loop Invariants, Exploration of Regularities, and Mathematical Games.
ERIC Educational Resources Information Center
Ginat, David
2001-01-01
Presents an approach for illustrating, on an intuitive level, the significance of loop invariants for algorithm design and analysis. The illustration is based on mathematical games that require the exploration of regularities via problem-solving heuristics. (Author/MM)
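One classic game of the kind such an approach can draw on is the take-away game, where a single invariant both dictates optimal play and proves it correct. A generic illustration, not one of the article's own examples:

```python
from functools import lru_cache

def winning_move(n):
    """Take-away game: players alternately remove 1-3 counters; whoever takes
    the last counter wins. Invariant: the player to move is losing iff
    n % 4 == 0, since every move from a multiple of 4 leaves a non-multiple,
    while from a non-multiple one can always restore a multiple of 4."""
    return None if n % 4 == 0 else n % 4

@lru_cache(maxsize=None)
def is_winning(n):
    """Brute-force game-tree evaluation (n = 0: player to move has lost)."""
    return any(not is_winning(n - t) for t in (1, 2, 3) if t <= n)
```

The exhaustive check `is_winning(n) == (n % 4 != 0)` is exactly the regularity a student is meant to discover before proving it as an invariant.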
On maximal parabolic regularity for non-autonomous parabolic operators
NASA Astrophysics Data System (ADS)
Disser, Karoline; ter Elst, A. F. M.; Rehberg, Joachim
2017-02-01
We consider linear inhomogeneous non-autonomous parabolic problems associated to sesquilinear forms, with discontinuous dependence of time. We show that for these problems, the property of maximal parabolic regularity can be extrapolated to time integrability exponents r ≠ 2. This allows us to prove maximal parabolic Lr-regularity for discontinuous non-autonomous second-order divergence form operators in very general geometric settings and to prove existence results for related quasilinear equations.
Deaths in the UK Regular Armed Forces 2006
2007-03-30
INTRODUCTION • This National Statistic Notice provides summary statistics on deaths in 2006... categories of cause of death for 2006 (Table 2 and Figure 2). • Several changes have been made in the presentation of data from previous years. As the Brigade of Gurkhas is part of the regular Army, this Notice has been amended to include both the numbers of deaths for Gurkhas and the age
Regular satellite formation and evolution in a dead zone
NASA Astrophysics Data System (ADS)
Chen, Cheng; Martin, Rebecca G.
2017-01-01
The dead zone in a circumplanetary disk is a non-turbulent region at the disk midplane that is an ideal location for regular satellite formation. The lower viscosity in the dead zone allows small objects to accrete and grow. We model the evolution of a circumplanetary disk with a dead zone for a range of disk and dead zone parameters. We investigate how these affect the formation and subsequent evolution of regular satellites that form in the disk.
Iterative CT reconstruction using shearlet-based regularization
NASA Astrophysics Data System (ADS)
Vandeghinste, Bert; Goossens, Bart; Van Holen, Roel; Vanhove, Christian; Pizurica, Aleksandra; Vandenberghe, Stefaan; Staelens, Steven
2012-03-01
In computerized tomography, it is important to reduce the image noise without increasing the acquisition dose. Extensive research has been done into total variation minimization for image denoising and sparse-view reconstruction. However, TV minimization methods show superior denoising performance for simple images (with little texture), but result in texture information loss when applied to more complex images. Since in medical imaging, we are often confronted with textured images, it might not be beneficial to use TV. Our objective is to find a regularization term outperforming TV for sparse-view reconstruction and image denoising in general. A recent efficient solver was developed for convex problems, based on a split-Bregman approach, able to incorporate regularization terms different from TV. In this work, a proof-of-concept study demonstrates the usage of the discrete shearlet transform as a sparsifying transform within this solver for CT reconstructions. In particular, the regularization term is the 1-norm of the shearlet coefficients. We compared our newly developed shearlet approach to traditional TV on both sparse-view and on low-count simulated and measured preclinical data. Shearlet-based regularization does not outperform TV-based regularization for all datasets. Reconstructed images exhibit small aliasing artifacts in sparse-view reconstruction problems, but show no staircasing effect. This results in a slightly higher resolution than with TV-based regularization.
The relationship between lifestyle regularity and subjective sleep quality
NASA Technical Reports Server (NTRS)
Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.
2003-01-01
In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant (rho = -0.4, p < 0.001) correlation between SRM (both metrics) and PSQI, indicating that subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
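The reported rho is a Spearman rank correlation, which is simple to compute directly. A minimal sketch on hypothetical subject data (not the study's data; the example merely reproduces the negative direction of the reported association):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no ties): Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# hypothetical subjects: higher lifestyle regularity (SRM) paired with
# lower sleep-complaint scores (PSQI), giving a negative rho
srm = [2.1, 3.4, 3.9, 4.5, 5.2, 5.8]
psqi = [9, 7, 8, 4, 3, 2]
print(spearman_rho(srm, psqi))  # negative, as in the reported rho = -0.4
```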
NASA Astrophysics Data System (ADS)
Chen, De-Han; Hofmann, Bernd; Zou, Jun
2017-01-01
We consider the ill-posed operator equation Ax = y with an injective and bounded linear operator A mapping between ℓ² and a Hilbert space Y, possessing the unique solution x† = {x†_k}, k = 1, ..., ∞. For the cases where sparsity x† ∈ ℓ⁰ is expected but often slightly violated in practice, we investigate, in comparison with ℓ¹-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the ℓ¹-norm and the square of the ℓ²-norm, under the assumption that x† ∈ ℓ¹. Two positive parameters occur in this approach: the weight parameter η, and the regularization parameter appearing as the multiplier of the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in ℓ¹-regularization. Based on the variational inequality approach for the description of the solution smoothness with respect to the forward operator A, and exploiting the method of approximate source conditions, we present results estimating the rate of convergence for the elastic-net regularization. The occurring rate function contains the rate of the decay x†_k → 0 as k → ∞ and the classical smoothness properties of x† as an element of ℓ².
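A minimal finite-dimensional sketch of elastic-net regularization, assuming a proximal-gradient (ISTA) solver rather than any method from the paper: the penalty alpha*(||x||_1 + eta*||x||_2^2) has a closed-form proximal operator, namely soft-thresholding followed by a multiplicative shrinkage.

```python
import numpy as np

def elastic_net_ista(A, y, alpha, eta, n_iter=1000):
    """Minimize 0.5||Ax-y||^2 + alpha*(||x||_1 + eta*||x||_2^2)
    by proximal gradient (ISTA); illustrative only."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    t = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - t * A.T @ (A @ x - y)                     # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - t * alpha, 0.0)  # l1 prox
        x /= 1.0 + 2.0 * t * alpha * eta                  # l2^2 shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 120))
x_true = np.zeros(120); x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = elastic_net_ista(A, y, alpha=0.1, eta=0.1)
```

Setting eta = 0 recovers plain ℓ¹-regularization; the extra ℓ²-square term is what stabilizes the solution when exact sparsity is slightly violated.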
Seiler, Jürgen; Jonscher, Markus; Schöberl, Michael; Kaup, André
2015-11-01
Even though image signals are typically defined on a regular 2D grid, there are also many scenarios where this is not the case and the amplitude of the image signal is only available for a non-regular subset of pixel positions. In such a case, a resampling of the image to a regular grid has to be carried out. This is necessary since almost all algorithms and technologies for processing, transmitting or displaying image signals rely on the samples being available on a regular grid. Thus, it is of great importance to reconstruct the image on this regular grid, so that the reconstruction comes as close as possible to the case that the signal had originally been acquired on the regular grid. In this paper, Frequency Selective Reconstruction is introduced for solving this challenging task. This algorithm reconstructs image signals by exploiting the property that small areas of images can be represented sparsely in the Fourier domain. By further considering the basic properties of the optical transfer function of imaging systems, a sparse model of the signal is iteratively generated. In doing so, the proposed algorithm is able to achieve a very high reconstruction quality, in terms of peak signal-to-noise ratio (PSNR) and structural similarity measure as well as in terms of visual quality. The simulation results show that the proposed algorithm is able to outperform state-of-the-art reconstruction algorithms, with gains of more than 1 dB PSNR.
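A toy 1D version of the sparse-in-Fourier idea (not the actual FSR algorithm, which works block-wise in 2D and weights the basis by the optical transfer function): greedily pick the frequency that best explains the residual on the known, non-regular sample positions, then jointly refit all selected frequencies by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
t = np.arange(N)
clean = np.cos(2*np.pi*3*t/N) + 0.5*np.sin(2*np.pi*7*t/N)
known = rng.random(N) < 0.6            # irregular subset of sample positions
y = np.where(known, clean, 0.0)

selected = []                          # frequencies in the sparse model
model = np.zeros(N)
for _ in range(4):
    resid = np.where(known, y - model, 0.0)
    spectrum = np.abs(np.fft.rfft(resid))
    k = int(np.argmax(spectrum[1:]) + 1)   # dominant non-DC frequency
    if k not in selected:
        selected.append(k)
    # jointly refit cos/sin atoms at all selected frequencies on the knowns
    cols = []
    for f in selected:
        cols += [np.cos(2*np.pi*f*t/N), np.sin(2*np.pi*f*t/N)]
    D = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(D[known], y[known], rcond=None)
    model = D @ coef                   # evaluated on the full regular grid
```

Because the true signal contains only two frequencies, the greedy loop finds them and the joint refit then reproduces the signal on the whole regular grid, including the unknown positions.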
Nondissipative Velocity and Pressure Regularizations for the ICON Model
NASA Astrophysics Data System (ADS)
Restelli, M.; Giorgetta, M.; Hundertmark, T.; Korn, P.; Reich, S.
2009-04-01
A challenging aspect in the numerical simulation of atmospheric and oceanic flows is the multiscale character of the problem both in space and time. The small spatial scales are generated by the turbulent energy and enstrophy cascades, and are usually dealt with by means of turbulence parametrizations, while the small temporal scales are governed by the propagation of acoustic and gravity waves, which are of little importance for the large scale dynamics and are often eliminated by means of a semi-implicit time discretization. We propose to treat both phenomena of subgrid turbulence and temporal scale separation in a unified way by means of nondissipative regularizations of the underlying model equations. More precisely, we discuss the use of two regularized equation sets: the velocity regularization, also known as the Lagrangian averaged Navier-Stokes system, and the pressure regularization. Both regularizations are nondissipative since they do not enhance the dissipation of energy and enstrophy of the flow. The velocity regularization models the effects of the subgrid velocity fluctuations on the mean flow; it has thus been proposed as a turbulence parametrization and has been found to yield promising results in ocean modeling [HHPW08]. In particular, the velocity regularization results in a higher variability of the numerical solution. The pressure regularization, discussed in [RWS07], modifies the propagation of acoustic and gravity waves so that the resulting system can be discretized explicitly in time with time steps analogous to those allowed by a semi-implicit method. Compared to semi-implicit time integrators, however, the pressure regularization takes fully into account the geostrophic balance of the flow. We discuss here the implementation of the velocity and pressure regularizations within the numerical framework of the ICON general circulation model (GCM) [BR05] for the case of the rotating shallow water system, showing how the original numerical
Regular treatment with salmeterol for chronic asthma: serious adverse events
Cates, Christopher J; Cates, Matthew J
2014-01-01
Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular salmeterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and FDA submissions in relation to salmeterol. The date of the most recent search was August 2011. Selection criteria We included controlled parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular salmeterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 26 trials comparing salmeterol to placebo and eight trials comparing it with salbutamol. These included 62,815 participants with asthma (including 2,599 children). In six trials (2,766 patients), no serious adverse event data could be obtained. All-cause mortality was higher with regular salmeterol than with placebo, but the increase was not significant (Peto odds ratio (OR) 1.33, 95% CI 0.85 to 2.08). Non-fatal serious adverse events were significantly increased when regular salmeterol was compared with placebo (OR 1.15, 95% CI 1.02 to 1.29). One extra serious adverse event occurred over 28 weeks for every 188 people
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and the multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an l2 data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used
NASA Astrophysics Data System (ADS)
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir; Zhou, Chuan
2012-03-01
Digital breast tomosynthesis (DBT) holds strong promise for improving the sensitivity of detecting subtle mass lesions. Detection of microcalcifications is more difficult because of high noise and subtle signals in the large DBT volume. It is important to enhance the contrast-to-noise ratio (CNR) of microcalcifications in DBT reconstruction. A major challenge of implementing microcalcification enhancement or noise regularization in DBT reconstruction is to preserve the image quality of masses, especially those with ill-defined margins and subtle spiculations. We are developing a new multiscale regularization (MSR) method for the simultaneous algebraic reconstruction technique (SART) to improve the CNR of microcalcifications without compromising the quality of masses. Each DBT slice is stratified into different frequency bands via wavelet decomposition and the regularization method applies different degrees of regularization to different frequency bands to preserve features of interest and suppress noise. Regularization is constrained by a characteristic map to avoid smoothing subtle microcalcifications. The characteristic map is generated via image feature analysis to identify potential microcalcification locations in the DBT volume. The MSR method was compared to the non-convex total p-variation (TpV) method and SART with no regularization (NR) in terms of the CNR and the full width at half maximum of the line profiles intersecting calcifications and mass spiculations in DBT of human subjects. The results demonstrated that SART regularized by the MSR method was superior to the TpV method for subtle microcalcifications in terms of CNR enhancement. The MSR method preserved the quality of subtle spiculations better than the TpV method in comparison to NR.
Another look at statistical learning theory and regularization.
Cherkassky, Vladimir; Ma, Yunqian
2009-09-01
The paper reviews and highlights distinctions between function-approximation (FA) and VC theory and methodology, mainly within the setting of regression problems and a squared-error loss function, and illustrates empirically the differences between the two when data is sparse and/or the input distribution is non-uniform. In FA theory, the goal is to estimate an unknown true dependency (or 'target' function) in regression problems, or the posterior probability P(y|x) in classification problems. In VC theory, the goal is to 'imitate' the unknown target function, in the sense of minimization of prediction risk or good 'generalization'. That is, the result of VC learning depends on the (unknown) input distribution, while that of FA does not. This distinction is important because regularization theory, originally introduced under a clearly stated FA setting [Tikhonov, A. N. (1963). On solving ill-posed problem and method of regularization. Doklady Akademii Nauk USSR, 153, 501-504; Tikhonov, A. N., & Arsenin, V. Y. (1977). Solution of ill-posed problems. Washington, DC: W. H. Winston], was later used under the risk-minimization or VC setting. More recently, several authors [Evgeniou, T., Pontil, M., & Poggio, T. (2000). Regularization networks and support vector machines. Advances in Computational Mathematics, 13, 1-50; Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning: Data mining, inference and prediction. Springer; Poggio, T., & Smale, S. (2003). The mathematics of learning: Dealing with data. Notices of the AMS, 50 (5), 537-544] applied constructive methodology based on the regularization framework to learning dependencies from data (under the VC-theoretical setting). However, such regularization-based learning is usually presented as a purely constructive methodology (with no clearly stated problem setting). This paper compares FA/regularization and VC/risk minimization methodologies in terms of underlying theoretical assumptions. The control of model
Early family regularity protects against later disruptive behavior.
Rijlaarsdam, Jolien; Tiemeier, Henning; Ringoot, Ank P; Ivanova, Masha Y; Jaddoe, Vincent W V; Verhulst, Frank C; Roza, Sabine J
2016-07-01
Infants' temperamental anger or frustration reactions are highly stable, but are also influenced by maturation and experience. It is yet unclear why some infants high in anger or frustration reactions develop disruptive behavior problems whereas others do not. We examined family regularity, conceptualized as the consistency of mealtime and bedtime routines, as a protective factor against the development of oppositional and aggressive behavior. This study used prospectively collected data from 3136 families participating in the Generation R Study. Infant anger or frustration reactions and family regularity were reported by mothers when children were ages 6 months and 2-4 years, respectively. Multiple informants (parents, teachers, and children) and methods (questionnaire and interview) were used in the assessment of children's oppositional and aggressive behavior at age 6. Higher levels of family regularity were associated with lower levels of child aggression independent of temperamental anger or frustration reactions (β = -0.05, p = 0.003). The association between child oppositional behavior and temperamental anger or frustration reactions was moderated by family regularity and child gender (β = 0.11, p = 0.046): family regularity reduced the risk for oppositional behavior among those boys who showed anger or frustration reactions in infancy. In conclusion, family regularity reduced the risk for child aggression and showed a gender-specific protective effect against child oppositional behavior associated with anger or frustration reactions. Families that ensured regularity of mealtime and bedtime routines buffered their infant sons high in anger or frustration reactions from developing oppositional behavior.
Particle motion and Penrose processes around rotating regular black hole
NASA Astrophysics Data System (ADS)
Abdujabbarov, Ahmadjon
2016-07-01
The motion of neutral particles around a rotating regular black hole, derived from the Ayón-Beato-García (ABG) black hole solution by the Newman-Janis algorithm in the preceding paper (Toshmatov et al., Phys. Rev. D, 89:104017, 2014), has been studied. The dependence of the ISCO (innermost stable circular orbit along geodesics) and of unstable orbits on the value of the electric charge of the rotating regular black hole has been shown. Energy extraction from the rotating regular black hole through various processes has been examined. We have found an expression for the center-of-mass energy of colliding neutral particles coming from infinity, based on the BSW (Bañados-Silk-West) mechanism. The electric charge Q of the rotating regular black hole decreases the potential of the gravitational field as compared to the Kerr black hole, and the particles are less bound at the circular geodesics. This increases the efficiency of energy extraction through the BSW process in the presence of the electric charge Q. Furthermore, we have studied particle emission due to the BSW effect, assuming that two neutral particles collide near the horizon of the rotating regular extremal black hole and produce another two particles. We have shown that the efficiency of the energy extraction is less than 146.6 %, the value valid for the Kerr black hole. It has also been demonstrated that the efficiency of the energy extraction from the rotating regular black hole via the Penrose process decreases with increasing electric charge Q and is smaller than 20.7 %, the value for the extreme Kerr black hole with specific angular momentum a = M.
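For reference, the BSW center-of-mass energy invoked above has a standard closed form; for two colliding particles of equal rest mass m with four-velocities u_1 and u_2 it reads (the standard BSW expression, not reproduced from this abstract):

```latex
E_{\mathrm{CM}}^{2} \;=\; 2m^{2}\left(1 - g_{\mu\nu}\,u_{1}^{\mu}u_{2}^{\nu}\right),
```

which diverges when one particle's angular momentum approaches its critical value at the horizon of an extremal black hole; the charge dependence reported in the abstract enters through the metric components g_{\mu\nu}.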
Regular treatment with formoterol for chronic asthma: serious adverse events
Cates, Christopher J; Cates, Matthew J
2014-01-01
Background Epidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about possible causal links for this association, and whether regular (daily) long-acting beta2-agonists are safe. Objectives The aim of this review is to assess the risk of fatal and non-fatal serious adverse events in trials that randomised patients with chronic asthma to regular formoterol versus placebo or regular short-acting beta2-agonists. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data and Food and Drug Administration (FDA) submissions in relation to formoterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel design clinical trials on patients of any age and severity of asthma if they randomised patients to treatment with regular formoterol and were of at least 12 weeks’ duration. Concomitant use of inhaled corticosteroids was allowed, as long as this was not part of the randomised treatment regimen. Data collection and analysis Two authors independently selected trials for inclusion in the review. One author extracted outcome data and the second author checked them. We sought unpublished data on mortality and serious adverse events. Main results The review includes 22 studies (8032 participants) comparing regular formoterol to placebo and salbutamol. Non-fatal serious adverse event data could be obtained for all participants from published studies comparing formoterol and placebo, but for only 80% of those comparing formoterol with salbutamol or terbutaline. Three deaths occurred on regular formoterol and none on placebo; this difference was not statistically significant. It was not possible to assess disease-specific mortality in view of the small number of deaths. Non-fatal serious adverse events were significantly increased when
Spatially variant regularization of lateral displacement measurement using variance.
Sumi, Chikayoshi; Itoh, Toshiki
2009-05-01
The purpose of this work is to confirm the effectiveness of our proposed spatially variant displacement component-dependent regularization for our previously developed ultrasonic two-dimensional (2D) displacement vector measurement methods, i.e., the 2D cross-spectrum phase gradient method (CSPGM), 2D autocorrelation method (AM), and 2D Doppler method (DM). Generally, the measurement accuracy of lateral displacement varies spatially and is lower than that of axial displacement, which is sufficiently accurate. This inaccurate measurement causes instability in a 2D shear modulus reconstruction. Thus, spatially variant lateral displacement regularization using the lateral displacement variance will be more effective in obtaining an accurate lateral strain measurement and a stable shear modulus reconstruction than a conventional spatially uniform regularization. The effectiveness is verified through agar phantom experiments. The agar phantom [60 mm (height) x 100 mm (lateral width) x 40 mm (elevational width)], which has, at a depth of 10 mm, a circular cylindrical inclusion (dia. = 10 mm) of a higher shear modulus (2.95 vs 1.43 x 10^6 N/m^2, i.e., relative shear modulus, 2.06), is compressed in the axial direction from the upper surface of the phantom using a commercial linear array type transducer with a nominal frequency of 7.5 MHz. Because the contrast-to-noise ratio (CNR) expresses the detectability of the inhomogeneous region in the lateral strain image and has almost the same sense as the signal-to-noise ratio (SNR) for strain measurement, the obtained results show that the proposed spatially variant lateral displacement regularization yields a more accurate lateral strain measurement as well as a higher detectability in the lateral strain image (e.g., CNRs and SNRs for 2D CSPGM, 2.36 vs 2.27 and 1.74 vs 1.71, respectively). Furthermore, the spatially variant lateral displacement regularization yields a more stable and more accurate 2D shear modulus
Reducing errors in the GRACE gravity solutions using regularization
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
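The L-curve idea can be sketched on a small dense problem. The following is a hypothetical stand-in (direct SVD instead of the Lanczos bidiagonalization used for the large GRACE problem), locating the corner as the point of maximum curvature of the log residual-norm versus log solution-norm curve:

```python
import numpy as np

def tikhonov_lcurve(A, b, lambdas):
    """Solve min ||Ax-b||^2 + lam^2 ||x||^2 over a grid of lam via the SVD
    and return the L-curve corner (maximum curvature in log-log coordinates).
    Small dense illustration only; large problems need iterative methods."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res, sol = [], []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        res.append(np.linalg.norm(A @ x - b))
        sol.append(np.linalg.norm(x))
    r, n_ = np.log(np.asarray(res)), np.log(np.asarray(sol))
    dr, dn = np.gradient(r), np.gradient(n_)
    ddr, ddn = np.gradient(dr), np.gradient(dn)
    kappa = (dr * ddn - ddr * dn) / (dr**2 + dn**2 + 1e-30) ** 1.5
    kappa[:2] = kappa[-2:] = 0.0            # ignore endpoint artifacts
    return lambdas[int(np.argmax(np.abs(kappa)))]

rng = np.random.default_rng(0)
m = 40
A = np.array([[1.0 / (i + j + 1) for j in range(m)] for i in range(m)])  # ill-conditioned
x_true = np.ones(m)
b = A @ x_true + 1e-4 * rng.standard_normal(m)
lam_best = tikhonov_lcurve(A, b, np.logspace(-8, 0, 60))
```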
Nonparametric Tikhonov Regularized NMF and Its Application in Cancer Clustering.
Mirzal, Andri
2014-01-01
The Tikhonov regularized nonnegative matrix factorization (TNMF) is an NMF objective function that enforces smoothness on the computed solutions, and has been successfully applied to many problem domains including text mining, spectral data analysis, and cancer clustering. There is, however, an issue that is still insufficiently addressed in the development of TNMF algorithms, i.e., how to develop mechanisms that can learn the regularization parameters directly from the data sets. The common approach is to use fixed values based on a priori knowledge about the problem domains. However, from the study of linear inverse problems it is known that the quality of the solutions of Tikhonov regularized least squares problems depends heavily on the choice of appropriate regularization parameters. Since least squares are the building blocks of the NMF, it can be expected that a similar situation also applies to the NMF. In this paper, we propose two formulas to automatically learn the regularization parameters from the data set based on the L-curve approach. We also develop a convergent algorithm for the TNMF based on additive update rules. Finally, we demonstrate the use of the proposed algorithm in cancer clustering tasks.
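A sketch of the TNMF objective with standard multiplicative updates, assuming fixed regularization parameters (the paper's contribution is precisely to learn these from the data via an L-curve criterion, which is not reproduced here):

```python
import numpy as np

def tnmf(V, rank, lam_w=0.1, lam_h=0.1, n_iter=300, seed=0):
    """Tikhonov-regularized NMF via multiplicative updates:
    min ||V - WH||_F^2 + lam_w ||W||_F^2 + lam_h ||H||_F^2,  W, H >= 0.
    Fixed lam_w, lam_h used here purely for illustration."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-12                       # avoids division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam_h * H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + lam_w * W + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((30, 20))              # toy nonnegative data matrix
W, H = tnmf(V, rank=5)
```

The multiplicative form keeps W and H nonnegative automatically, since every update multiplies by a ratio of nonnegative quantities.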
X-ray computed tomography using curvelet sparse regularization
Wieczorek, Matthias; Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.
2015-04-15
Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method’s strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
Rotating Hayward's regular black hole as particle accelerator
NASA Astrophysics Data System (ADS)
Amir, Muhammed; Ghosh, Sushant G.
2015-07-01
Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g > 0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M = 1, there exist a critical a_E and r_H^E, which correspond to a regular extremal black hole with degenerate horizons; a_E decreases whereas r_H^E increases with increasing g, while a < a_E describes a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward regular black hole for different g and demonstrate numerically that E_CM diverges in the vicinity of the horizon for the extremal cases, suggesting that a rotating regular black hole can also act as a particle accelerator and thus provide a suitable framework for Planck-scale physics. For the non-extremal case, there always exists a finite upper bound for E_CM, which increases with the deviation parameter g.
Regularization of inverse planning for intensity-modulated radiotherapy.
Chvetsov, Alexei V; Calvetti, Daniela; Sohn, Jason W; Kinsella, Timothy J
2005-02-01
The performance of a variational regularization technique to improve the robustness of inverse treatment planning for intensity-modulated radiotherapy is analyzed and tested. Inverse treatment planning is based on numerical solutions of the Fredholm integral equation of the first kind, which is ill-posed. Therefore, a fundamental problem with inverse treatment planning is that it may exhibit instabilities, manifested in nonphysical oscillations in the beam intensity functions. To control the instabilities, we consider a variational regularization technique that can be applied to methods that minimize a quadratic objective function. In this technique, the quadratic objective function is modified by adding a stabilizing functional that allows for arbitrary-order regularization. An optimal form of the stabilizing functional is selected which allows for both regularization and a good approximation of the beam intensity functions. The regularized optimization algorithm is shown, by comparison for a typical head-and-neck cancer treatment case, to be significantly more accurate and robust than the standard approach, particularly for smaller beamlet sizes.
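The scheme described, a quadratic objective plus a stabilizing functional, admits a closed-form solution in the discrete case. The toy sketch below penalizes second differences of the intensity profile to suppress nonphysical oscillations; the matrix A is a random stand-in for a dose-deposition operator, not a clinical model.

```python
import numpy as np

def regularized_intensity(A, d, beta):
    """Minimize ||A x - d||^2 + beta ||D x||^2, where D is the
    second-difference operator (a second-order stabilizing functional
    that damps oscillations in the intensity profile x)."""
    n = A.shape[1]
    D = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))[1:-1]   # interior rows only
    return np.linalg.solve(A.T @ A + beta * D.T @ D, A.T @ d)

rng = np.random.default_rng(0)
A = rng.random((80, 40))              # toy stand-in for a dose operator
x_smooth = np.linspace(1.0, 2.0, 40)  # smooth "true" intensity profile
d = A @ x_smooth + 0.05 * rng.standard_normal(80)
x0 = regularized_intensity(A, d, beta=0.0)   # unregularized: oscillatory
x1 = regularized_intensity(A, d, beta=1.0)   # regularized: smoother
```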
Incorporating anatomical side information into PET reconstruction using nonlocal regularization.
Nguyen, Van-Giang; Lee, Soo-Jin
2013-10-01
With the introduction of combined positron emission tomography (PET)/computed tomography (CT) or PET/magnetic resonance imaging (MRI) scanners, there is an increasing emphasis on reconstructing PET images with the aid of the anatomical side information obtained from X-ray CT or MRI scanners. In this paper, we propose a new approach to incorporating prior anatomical information into PET reconstruction using the nonlocal regularization method. The nonlocal regularizer developed for this application is designed to selectively consider the anatomical information only when it is reliable. As our proposed nonlocal regularization method does not directly use anatomical edges or boundaries which are often used in conventional methods, it is not only free from additional processes to extract anatomical boundaries or segmented regions, but also more robust to the signal mismatch problem that is caused by the indirect relationship between the PET image and the anatomical image. We perform simulations with digital phantoms. According to our experimental results, compared to the conventional method based on the traditional local regularization method, our nonlocal regularization method performs well even with the imperfect prior anatomical information or in the presence of signal mismatch between the PET image and the anatomical image.
Alternating Direction Method of Multiplier for Tomography With Nonlocal Regularizers
Dewaraja, Yuni K.; Fessler, Jeffrey A.
2015-01-01
The ordered subset expectation maximization (OSEM) algorithm approximates the gradient of a likelihood function using a subset of projections instead of all projections, so that fast image reconstruction is possible for emission and transmission tomography such as SPECT, PET, and CT. However, OSEM does not significantly accelerate reconstruction with computationally expensive regularizers such as patch-based nonlocal (NL) regularizers, because the regularizer gradient is evaluated for every subset. We propose to use variable splitting to separate the likelihood term and the regularizer term for the penalized emission tomographic image reconstruction problem and to optimize it using the alternating direction method of multipliers (ADMM). We also propose a fast algorithm to optimize the ADMM parameter based on convergence rate analysis. This new scheme enables more sub-iterations related to the likelihood term. We evaluated our ADMM for 3-D SPECT image reconstruction with a patch-based NL regularizer that uses the Fair potential function. Our proposed ADMM improved the speed of convergence substantially compared to other existing methods such as gradient descent, EM, and OSEM using De Pierro’s approach, and the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. PMID:25291351
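The variable-splitting idea can be shown on a generic problem: separate the quadratic data term from the regularizer via a constraint x = z and alternate the two subproblems. The sketch below uses an l1 regularizer as a simple stand-in for the patch-based nonlocal penalty, and a fixed ADMM parameter rho rather than the paper's optimized one.

```python
import numpy as np

def admm_l1(A, b, lam, rho=1.0, n_iter=200):
    """Solve min 0.5||Ax-b||^2 + lam||z||_1  s.t. x = z with scaled ADMM:
    the data term and the (stand-in) regularizer are handled in
    separate subproblems, as in the splitting described above."""
    n = A.shape[1]
    Atb = A.T @ b
    M = A.T @ A + rho * np.eye(n)   # constant across iterations
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))          # data subproblem
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of l1
        u = u + x - z                                        # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[[5, 50, 95]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = admm_l1(A, b, lam=0.2)
```

The practical benefit matches the abstract's point: the expensive regularizer is touched once per outer iteration, while the data subproblem can run as many inner (sub)iterations as needed.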
Regularization methods for near-field acoustical holography.
Williams, E G
2001-10-01
The reconstruction of the pressure and normal surface velocity provided by near-field acoustical holography (NAH) from pressure measurements made near a vibrating structure is a linear, ill-posed inverse problem due to the existence of strongly decaying, evanescent-like waves. Regularization provides a technique for overcoming the ill-posedness and generates a solution to the linear problem in an automated way. We present four robust methods for regularization: the standard Tikhonov procedure along with a novel improved version, Landweber iteration, and the conjugate gradient approach. Each of these approaches can be applied to all forms of interior or exterior NAH problems: planar, cylindrical, spherical, and conformal. We also study two parameter selection procedures, the Morozov discrepancy principle and generalized cross-validation, which are crucial to any regularization theory. In particular, we concentrate here on planar and cylindrical holography. These forms of NAH, which rely on the discrete Fourier transform, are important due to their popularity and their tremendous computational speed. In order to use regularization theory for the separable-geometry problems, we reformulate the equations of planar, cylindrical, and spherical NAH into an eigenvalue problem. The resulting eigenvalues and eigenvectors couple easily to regularization theory, which can be incorporated into the NAH software with little sacrifice in computational speed. The resulting complete automation of the NAH algorithm for both separable and nonseparable geometries overcomes the last significant hurdle for NAH.
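Once the problem is reformulated in spectral form, Tikhonov regularization reduces to applying filter factors to the singular values. A minimal sketch under that assumption (generic linear system, not the NAH operators themselves):

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov-regularized solution of Ax = b via the SVD. The filter
    factors sigma/(sigma^2 + alpha) damp the small singular values that
    amplify measurement noise (the ill-posed, evanescent-like directions);
    alpha = 0 reduces to the pseudo-inverse solution."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s ** 2 + alpha)        # Tikhonov filter factors
    return Vt.T @ (f * (U.T @ b))
```

Parameter-choice rules such as the Morozov discrepancy principle or generalized cross-validation then amount to selecting alpha from the same spectral quantities, which is why the eigen-reformulation costs so little extra computation.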
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present a refined error analysis of the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct an elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg, including sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg can be simultaneously stable and sparse, which makes it attractive both theoretically and practically.
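For reference, the classical (non-kernelized) elastic net that KENReg kernelizes can be solved with a simple proximal-gradient loop. The sketch below uses a plain linear design matrix rather than a kernelized dictionary:

```python
import numpy as np

def elastic_net(A, b, lam1=0.1, lam2=0.1, iters=200):
    """Proximal-gradient solver for the classical elastic net
    min_x 0.5||Ax - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2.
    The l1 term induces sparsity; the l2 term stabilizes the solution."""
    n = A.shape[1]
    x = np.zeros(n)
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L
    for _ in range(iters):
        g = A.T @ (A @ x - b)                    # gradient of the data term
        v = x - t * g
        # prox of the elastic net penalty: soft-threshold, then shrink
        x = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0) / (1 + t * lam2)
    return x
```

The KENReg variant would replace the columns of A by kernel evaluations on the training points; since it works in the coefficient space, the kernel need not be positive definite.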
Enumeration of Extended m-Regular Linear Stacks.
Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian
2016-12-01
The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by 2. Since the two terminal points in a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended m-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is closer to real protein contact maps. We show that the generating function of the extended m-regular linear stacks can be written as a rational function of the generating function of the m-regular linear stacks. For a given m, by eliminating the latter, we obtain an equation satisfied by the former and derive an asymptotic formula for the number of extended m-regular linear stacks of length n.
Regularized friction and continuation: Comparison with Coulomb's law
NASA Astrophysics Data System (ADS)
Vigué, Pierre; Vergez, Christophe; Karkar, Sami; Cochelin, Bruno
2017-02-01
Periodic solutions of systems with friction are difficult to investigate because of the non-smooth nature of friction laws. This paper examines periodic solutions and most notably stick-slip, on a simple one-degree-of-freedom system (mass, spring, damper, and belt), with Coulomb's friction law, and with a regularized friction law (i.e. the friction coefficient becomes a function of relative speed, with a stiffness parameter). With Coulomb's law, the stick-slip solution is constructed step by step, which gives a usable existence condition. With the regularized law, the Asymptotic Numerical Method and the Harmonic Balance Method provide bifurcation diagrams with respect to the belt speed or normal force, and for several values of the regularization parameter. Formulations from the Coulomb case give the means of a comparison between regularized solutions and a standard reference. With an appropriate definition, regularized stick-slip motion exists, its amplitude increases with respect to the belt speed and its pulsation decreases with respect to the normal force.
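A common way to regularize Coulomb's law is to smooth the sign function of the relative speed. The tanh form below is one standard choice used here for illustration; it is not necessarily the exact law used in the paper, but it shows the role of the stiffness (regularization) parameter.

```python
import numpy as np

def coulomb(v_rel, mu, N):
    """Coulomb friction force: discontinuous at zero relative speed,
    which is what makes stick-slip analysis non-smooth."""
    return -mu * N * np.sign(v_rel)

def regularized(v_rel, mu, N, eps=1e-3):
    """Regularized friction law: the force is a smooth function of the
    relative speed. eps plays the role of the stiffness parameter; the
    smaller eps, the closer the law is to Coulomb's. The tanh shape is
    one common choice, assumed here for illustration."""
    return -mu * N * np.tanh(v_rel / eps)
```

With a smooth law, continuation methods such as the Asymptotic Numerical Method can trace periodic solution branches; comparing the limit eps → 0 against the step-by-step Coulomb construction is exactly the kind of check the paper performs.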
Fast multislice fluorescence molecular tomography using sparsity-inducing regularization
NASA Astrophysics Data System (ADS)
Hejazi, Sedigheh Marjaneh; Sarkar, Saeed; Darezereshki, Ziba
2016-02-01
Fluorescence molecular tomography (FMT) is a rapidly growing imaging method that facilitates the recovery of small fluorescent targets within biological tissue. The major challenge facing the FMT reconstruction method is the ill-posed nature of the inverse problem. In order to overcome this problem, the acquisition of large FMT datasets and the utilization of a fast FMT reconstruction algorithm with sparsity regularization have been suggested recently. Therefore, the use of a joint L1/total-variation (TV) regularization as a means of solving the ill-posed FMT inverse problem is proposed. A comparative quantified analysis of regularization methods based on L1-norm and TV are performed using simulated datasets, and the results show that the fast composite splitting algorithm regularization method can ensure the accuracy and robustness of the FMT reconstruction. The feasibility of the proposed method is evaluated in an in vivo scenario for the subcutaneous implantation of a fluorescent-dye-filled capillary tube in a mouse, and also using hybrid FMT and x-ray computed tomography data. The results show that the proposed regularization overcomes the difficulties created by the ill-posed inverse problem.
Radial basis function networks and complexity regularization in function learning.
Krzyzak, A; Linder, T
1998-01-01
In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single hidden layer radial basis function network. Our approach differs from previous complexity regularization neural-network function learning schemes in that we operate with random covering numbers and l1 metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
Manufacture of Regularly Shaped Sol-Gel Pellets
NASA Technical Reports Server (NTRS)
Leventis, Nicholas; Johnston, James C.; Kinder, James D.
2006-01-01
An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, while regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogel, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogel. Advantageously, the extruded pellets can be dropped directly in a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross linker. These pellets can be dried to mechanically robust X-Aerogel.
Context effects on orthographic learning of regular and irregular words.
Wang, Hua-Chen; Castles, Anne; Nickels, Lyndsey; Nation, Kate
2011-05-01
The self-teaching hypothesis proposes that orthographic learning takes place via phonological decoding in meaningful texts, that is, in context. Context is proposed to be important in learning to read, especially when decoding is only partial. However, little research has directly explored this hypothesis. The current study looked at the effect of context on orthographic learning and examined whether there were different effects for novel words given regular and irregular pronunciations. Two experiments were conducted using regular and irregular novel words, respectively. Second-grade children were asked to learn eight novel words either in stories or in a list of words. The results revealed no significant effect of context for the regular items. However, in an orthographic decision task, there was a facilitatory effect of context on irregular novel word learning. The findings support the view that contextual information is important to orthographic learning, but only when the words to be learned contain irregular spelling-sound correspondences.
Analysis of the "Learning in Regular Classrooms" movement in China.
Deng, M; Manset, G
2000-04-01
The Learning in Regular Classrooms experiment has evolved in response to China's efforts to educate its large population of students with disabilities who, until the mid-1980s, were denied a free education. In the Learning in Regular Classrooms, students with disabilities (primarily sensory impairments or mild mental retardation) are educated in neighborhood schools in mainstream classrooms. Despite difficulties associated with developing effective inclusive programming, this approach has contributed to a major increase in the enrollment of students with disabilities and increased involvement of schools, teachers, and parents in China's newly developing special education system. Here we describe the development of the Learning in Regular Classroom approach and the challenges associated with educating students with disabilities in China.
Wavelet domain image restoration with adaptive edge-preserving regularization.
Belge, M; Kilmer, M E; Miller, E L
2000-01-01
In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
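The core wavelet-shrinkage step can be sketched with a one-level Haar transform and a single global soft threshold. The paper's scheme goes further by adapting the threshold to scale and orientation; this minimal version only shows why thresholding wavelet coefficients preserves edges.

```python
import numpy as np

def haar_denoise(y, thresh):
    """One-level orthonormal Haar wavelet shrinkage on a 1-D signal of even
    length: soft-threshold the detail coefficients (which concentrate edges
    and noise) and reconstruct. A minimal stand-in for the paper's adaptive,
    edge-preserving scheme."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)      # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    out = np.empty_like(y, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)          # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

Because a sharp jump produces one large detail coefficient that survives the threshold while small noise coefficients are killed, the reconstruction keeps the edge; an adaptive scheme would vary `thresh` with local structure instead of using one global value.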
Breast ultrasound tomography with total-variation regularization
Huang, Lianjie; Li, Cuiping; Duric, Neb
2009-01-01
Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.
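The edge-preserving behavior of TV regularization is easy to see in one dimension. The sketch below solves the 1-D TV denoising problem by projected gradient ascent on its dual, a textbook method rather than the reconstruction algorithm of the paper:

```python
import numpy as np

def tv_denoise_1d(y, lam=1.0, iters=2000):
    """1-D total-variation denoising,
        min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|,
    solved by projected gradient ascent on the dual problem. D is the
    forward-difference operator; p is the dual variable, one per difference."""
    p = np.zeros(len(y) - 1)
    tau = 0.25                            # step size <= 1/||D||^2
    for _ in range(iters):
        q = np.concatenate(([0.0], p, [0.0]))
        x = y + np.diff(q)                # x = y - D^T p
        p = np.clip(p + tau * np.diff(x), -lam, lam)   # project onto |p| <= lam
    q = np.concatenate(([0.0], p, [0.0]))
    return y + np.diff(q)
```

On a step signal this flattens each side while keeping the jump, whereas quadratic (Tikhonov) smoothing would blur it; this is the "sharp edges of abnormalities" behavior reported for the sound-speed images.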
Consistent regularization and renormalization in models with inhomogeneous phases
NASA Astrophysics Data System (ADS)
Adhikari, Prabal; Andersen, Jens O.
2017-02-01
In many models in condensed matter and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different ways of consistently regularizing and renormalizing quantum fluctuations, focusing on momentum cutoff, symmetric energy cutoff, and dimensional regularization. We apply these techniques by calculating the vacuum energy in the Nambu-Jona-Lasinio model in 1+1 dimensions in the large-Nc limit and in the 3+1 dimensional quark-meson model in the mean-field approximation, both for a one-dimensional chiral-density wave.
Structural characterization of the packings of granular regular polygons
NASA Astrophysics Data System (ADS)
Wang, Chuncheng; Dong, Kejun; Yu, Aibing
2015-12-01
By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.
Gamma regularization based reconstruction for low dose CT.
Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis
2015-09-07
Reducing the radiation in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on l0-norm and l1-norm. We evaluate the proposed approach using the projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction can perform better in both edge-preserving and noise suppression when compared with other norms.
Local conservative regularizations of compressible magnetohydrodynamic and neutral flows
NASA Astrophysics Data System (ADS)
Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.
2016-02-01
Ideal systems like magnetohydrodynamics (MHD) and Euler flow may develop singularities in vorticity (w = ∇×v). Viscosity and resistivity provide dissipative regularizations of the singularities. In this paper, we propose a minimal, local, conservative, nonlinear, dispersive regularization of compressible flow and ideal MHD, in analogy with the KdV regularization of the 1D kinematic wave equation. This work extends and significantly generalizes earlier work on incompressible Euler and ideal MHD. It involves a micro-scale cutoff length λ which is a function of density, unlike in the incompressible case. In MHD, it can be taken to be of the order of the electron collisionless skin depth c/ω_pe. Our regularization preserves the symmetries of the original systems and, with appropriate boundary conditions, leads to associated conservation laws. Energy and enstrophy are subject to a priori bounds determined by initial data in contrast to the unregularized systems. A Hamiltonian and Poisson bracket formulation is developed and applied to generalize the constitutive relation to bound higher moments of vorticity. A "swirl" velocity field is identified, and shown to transport w/ρ and B/ρ, generalizing the Kelvin-Helmholtz and Alfvén theorems. The steady regularized equations are used to model a rotating vortex, MHD pinch, and a plane vortex sheet. The proposed regularization could facilitate numerical simulations of fluid/MHD equations and provide a consistent statistical mechanics of vortices/current filaments in 3D, without blowup of enstrophy. Implications for detailed analyses of fluid and plasma dynamic systems arising from our work are briefly discussed.
Zigzag stacks and m-regular linear stacks.
Chen, William Y C; Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian
2014-12-01
The contact map of a protein fold is a graph that represents the patterns of contacts in the fold. It is known that the contact map can be decomposed into stacks and queues. RNA secondary structures are special stacks in which the degree of each vertex is at most one and each arc has length of at least two. Waterman and Smith derived a formula for the number of RNA secondary structures of length n with exactly k arcs. Höner zu Siederdissen et al. developed a folding algorithm for extended RNA secondary structures in which each vertex has maximum degree two. An equation for the generating function of extended RNA secondary structures was obtained by Müller and Nebel by using a context-free grammar approach, which leads to an asymptotic formula. In this article, we consider m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by two. Extended RNA secondary structures are exactly 2-regular linear stacks. For any m ≥ 2, we obtain an equation for the generating function of the m-regular linear stacks. For given m, we deduce a recurrence relation and an asymptotic formula for the number of m-regular linear stacks on n vertices. To establish the equation, we use the reduction operation of Chen, Deng, and Du to transform an m-regular linear stack to an m-reduced zigzag (or alternating) stack. Then we find an equation for m-reduced zigzag stacks leading to an equation for m-regular linear stacks.
UV radiation transmittance: regular clothing versus sun-protective clothing.
Bielinski, Kenneth; Bielinski, Nolan
2014-09-01
There are many clothing options available for patients who are interested in limiting their exposure to UV radiation; however, these options can be confusing for patients. For dermatologists, there is limited clinical data regarding the advantages, if any, of sun-protective clothing. In this study, we examined the UV radiation transmittance of regular clothing versus sun-protective clothing. We found that regular clothing may match or even exceed sun-protective clothing in blocking the transmittance of UV radiation. These data will help dermatologists better counsel their patients on clothing options for sun protection.
Lifshitz anomalies, Ward identities and split dimensional regularization
NASA Astrophysics Data System (ADS)
Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia
2017-03-01
We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2 + 1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.
Construction of regular black holes in general relativity
NASA Astrophysics Data System (ADS)
Fan, Zhong-Ying; Wang, Xiaobao
2016-12-01
We present a general procedure for constructing exact black hole solutions with electric or magnetic charges in general relativity coupled to a nonlinear electrodynamics. We obtain a variety of two-parameter families of spherically symmetric black hole solutions. In particular, the singularity at the center of the space-time can be canceled in the parameter space and the black hole solutions become regular everywhere in space-time. We study the global properties of the solutions and derive the first law of thermodynamics. We also generalize the procedure to include a cosmological constant and construct regular black hole solutions that are asymptotic to anti-de Sitter space-time.
Regular bouncing cosmological solutions in effective actions in four dimensions
NASA Astrophysics Data System (ADS)
Constantinidis, C. P.; Fabris, J. C.; Furtado, R. G.; Picco, M.
2000-02-01
We study cosmological scenarios resulting from effective actions in four dimensions which are, under some assumptions, connected with multidimensional, supergravity and string theories. These effective actions are labeled by the parameters ω, the dilaton coupling constant, and n which establishes the coupling between the dilaton and a scalar field originating from the gauge field existing in the original theories. There is a large class of bouncing as well as Friedmann-like solutions. We investigate under which conditions bouncing regular solutions can be obtained. In the case of the string effective action, regularity is obtained through the inclusion of contributions from the Ramond-Ramond sector of superstring.
The structure of split regular BiHom-Lie algebras
NASA Astrophysics Data System (ADS)
Calderón, Antonio J.; Sánchez, José M.
2016-12-01
We introduce the class of split regular BiHom-Lie algebras as the natural extension of the one of split Hom-Lie algebras and so of split Lie algebras. We show that an arbitrary split regular BiHom-Lie algebra L is of the form L = U + ∑_j I_j with U a linear subspace of a fixed maximal abelian subalgebra H and any I_j a well-described (split) ideal of L, satisfying [I_j, I_k] = 0 if j ≠ k. Under certain conditions, the simplicity of L is characterized and it is shown that L is the direct sum of the family of its simple ideals.
One-way regular electromagnetic mode immune to backscattering.
Deng, Xiaohua; Hong, Lujun; Zheng, Xiaodong; Shen, Linfang
2015-05-10
In this paper, we present a basic model of robust one-way electromagnetic modes at microwave frequencies, which is formed by a semi-infinite gyromagnetic yttrium-iron-garnet with dielectric cladding terminated by a metal plate. It is shown that this system supports not only one-way surface magnetoplasmons (SMPs) but also a one-way regular mode, which is guided by the mechanism of total internal reflection. Like one-way SMPs, the one-way regular mode can be immune to backscattering, and two types of one-way modes together make up a complete dispersion band for the system.
Adding Asymmetrically Dominated Alternatives: Violations of Regularity & the Similarity Hypothesis.
1981-07-01
statistically significant (McNemar test, Siegel 1956) at a p < 0.05 level. Technically, however, the test of regularity should code switching to the decoy as... who switched between target and competitor: 63% switched to the target, 37% to the competitor (McNemar test: χ² = (28)²/109 = 7.2, p < .05). 4. Grouping those... who switched to the decoy with the competitor (for a strong test of regularity): 59% switched to the target while 41% switched away (McNemar test: ...)
Nonlinear run-ups of regular waves on sloping structures
NASA Astrophysics Data System (ADS)
Hsu, T.-W.; Liang, S.-J.; Young, B.-D.; Ou, S.-H.
2012-12-01
For coastal risk mapping, it is extremely important to accurately predict wave run-ups since they influence overtopping calculations; however, nonlinear run-ups of regular waves on sloping structures are still not accurately modeled. We report the development of a high-order numerical model for regular waves based on the second-order nonlinear Boussinesq equations (BEs) derived by Wei et al. (1995). We calculated 160 cases of wave run-ups of nonlinear regular waves over various slope structures. Laboratory experiments were conducted in a wave flume for regular waves propagating over three plane slopes: tan α = 1/5, 1/4, and 1/3. The numerical results, laboratory observations, as well as previous datasets were in good agreement. We have also proposed an empirical formula for the relative run-up in terms of two parameters: the Iribarren number ξ and the structure slope tan α. The prediction capability of the proposed formula was tested using previous data covering the range ξ ≤ 3 and 1/5 ≤ tan α ≤ 1/2 and found to be acceptable. Our study serves as a stepping stone to investigate run-up predictions for irregular waves and more complex geometries of coastal structures.
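The first of the two parameters, the Iribarren (surf-similarity) number, has a standard definition that can be computed directly. The paper's fitted run-up formula itself is not reproduced here; this only evaluates the parameter it depends on.

```python
import math

def iribarren(tan_alpha, H, T, g=9.81):
    """Surf-similarity (Iribarren) number xi = tan(alpha) / sqrt(H / L0),
    with deep-water wavelength L0 = g*T^2/(2*pi). tan_alpha is the structure
    slope, H the wave height (m), T the wave period (s). This is the standard
    definition, not the paper's fitted run-up formula."""
    L0 = g * T ** 2 / (2.0 * math.pi)     # deep-water wavelength
    return tan_alpha / math.sqrt(H / L0)
```

Larger ξ (steeper slope or longer, lower waves) corresponds to surging/collapsing breakers and generally larger relative run-up, which is why run-up formulas are commonly parameterized by ξ.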
Rhythm's Gonna Get You: Regular Meter Facilitates Semantic Sentence Processing
ERIC Educational Resources Information Center
Rothermich, Kathrin; Schmidt-Kassow, Maren; Kotz, Sonja A.
2012-01-01
Rhythm is a phenomenon that fundamentally affects the perception of events unfolding in time. In language, we define "rhythm" as the temporal structure that underlies the perception and production of utterances, whereas "meter" is defined as the regular occurrence of beats (i.e. stressed syllables). In stress-timed languages such as German, this…
75 FR 1057 - Farm Credit Administration Board; Regular Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-08
... [Federal Register Volume 75, Number 5 (Friday, January 8, 2010)] [Notices] [Page 1057] [FR Doc No: 2010-246] FARM CREDIT ADMINISTRATION Farm Credit Administration Board; Regular Meeting AGENCY: Farm Credit Administration. SUMMARY: Notice is hereby given, pursuant to the Government in the Sunshine Act...
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
Regularized Partial and/or Constrained Redundancy Analysis
ERIC Educational Resources Information Center
Takane, Yoshio; Jung, Sunho
2008-01-01
Methods of incorporating a ridge type of regularization into partial redundancy analysis (PRA), constrained redundancy analysis (CRA), and partial and constrained redundancy analysis (PCRA) were discussed. The usefulness of ridge estimation in reducing mean square error (MSE) has been recognized in multiple regression analysis for some time,…
An Interesting Lemma for Regular C-fractions
NASA Astrophysics Data System (ADS)
Chen, Kwang-Wu
2003-12-01
In this short note we give an interesting lemma for regular C-fractions. Applying this lemma we obtain some congruence properties of some classical numbers such as the Springer numbers of even index, the median Euler numbers, the median Genocchi numbers, and the tangent numbers.
Psychological Benefits of Regular Physical Activity: Evidence from Emerging Adults
ERIC Educational Resources Information Center
Cekin, Resul
2015-01-01
Emerging adulthood is a transitional stage between late adolescence and young adulthood in life-span development that requires significant changes in people's lives. Therefore, identifying protective factors for this population is crucial. This study investigated the effects of regular physical activity on self-esteem, optimism, and happiness in…
Image super-resolution via adaptive filtering and regularization
NASA Astrophysics Data System (ADS)
Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming
2014-11-01
Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes the image by total variation (TV) and/or a wavelet or some other transform, which introduces some artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter is used first to remove the spatial relevance among pixels, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered as the regularization term. Concretely, by transforming the original model, the SR problem can be solved by two alternating iteration sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image can be obtained by solving the first and second sub-problems, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.
Regularization of open superstring from orientable closed surface
Frampton, P.H.; Kshirsagar, A.K.; Ng, Y.J.
1986-10-15
By tracing the one-loop annulus and Moebius diagrams to a common origin, as integration contours on a torus, the principal-part regularization of the open superstring is given some justification. The result hints at the possibility of a simple topological expansion for open superstrings.
47 CFR 76.614 - Cable television system regular monitoring.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Cable television system regular monitoring. 76.614 Section 76.614 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.614 Cable...
Regularization in Short-Term Memory for Serial Order
ERIC Educational Resources Information Center
Botvinick, Matthew; Bylsma, Lauren M.
2005-01-01
Previous research has shown that short-term memory for serial order can be influenced by background knowledge concerning regularities of sequential structure. Specifically, it has been shown that recall is superior for sequences that fit well with familiar sequencing constraints. The authors report a corresponding effect pertaining to serial…
Maximal regularity for perturbed integral equations on periodic Lebesgue spaces
NASA Astrophysics Data System (ADS)
Lizama, Carlos; Poblete, Verónica
2008-12-01
We characterize the maximal regularity of periodic solutions for an additive perturbed integral equation with infinite delay in the vector-valued Lebesgue spaces. Our method is based on operator-valued Fourier multipliers. We also study resonances, characterizing the existence of solutions in terms of a compatibility condition on the forcing term.
Implicit Learning of L2 Word Stress Regularities
ERIC Educational Resources Information Center
Chan, Ricky K. W.; Leung, Janny H. C.
2014-01-01
This article reports an experiment on the implicit learning of second language stress regularities, and presents a methodological innovation on awareness measurement. After practising two-syllable Spanish words, native Cantonese speakers with English as a second language (L2) completed a judgement task. Critical items differed only in placement of…
Elementary Teachers' Perspectives of Inclusion in the Regular Education Classroom
ERIC Educational Resources Information Center
Olinger, Becky Lorraine
2013-01-01
The purpose of this qualitative study was to examine regular education and special education teacher perceptions of inclusion services in an elementary school setting. In this phenomenological study, purposeful sampling techniques and data were used to conduct a study of inclusion in the elementary schools. In-depth one-to-one interviews with 8…
29 CFR 778.408 - The specified regular rate.
Code of Federal Regulations, 2010 CFR
2010-07-01
... POLICY OR INTERPRETATION NOT DIRECTLY RELATED TO REGULATIONS OVERTIME COMPENSATION Exceptions From the Regular Rate Principles Guaranteed Compensation Which Includes Overtime Pay § 778.408 The specified... reasonably be expected to be operative in controlling the employee's compensation. (c) The rate specified...
New Technologies in Portugal: Regular Middle and High School
ERIC Educational Resources Information Center
Florentino, Teresa; Sanchez, Lucas; Joyanes, Luis
2010-01-01
Purpose: The purpose of this paper is to elaborate upon the relation between information and communication technologies (ICT), particularly web-based resources, and their use, programs and learning in Portuguese middle and high regular public schools. Design/methodology/approach: Adding collected documentation on curriculum, laws and other related…
Rotating bearings in regular and irregular granular shear packings.
Åström, J. A.
2008-01-01
For 2D regular dense packings of solid mono-size non-sliding disks there is a mechanism for bearing formation under shear that can be explained theoretically. There is, however, no easy way to extend this model to include random dense packings which would better describe natural packings. A numerical model that simulates shear deformation for both near-regular and irregular packings is used to demonstrate that rotating bearings appear roughly with the same density in random and regular packings. The main difference appears in the size distribution of the rotating clusters near the jamming threshold. The size distribution is well described by a scaling form with a large-size cut-off that seems to grow without bounds for regular packings at the jamming threshold, while it remains finite for irregular packings. At packing densities above the jamming transition there can be no shear, unless the disks are allowed to break. Breaking of disks induces a large number of small local bearings. Clusters of rotating particles may contribute to e.g. pre-rupture yielding in landslides, snow avalanches and to the formation of aseismic gaps in tectonic fault zones.
Autocorrelation and Regularization of Query-Based Information Retrieval Scores
2008-02-01
projected scores. This problem has similar solutions to monolingual regularization. The iterative solution is $f^t_{t+1} = (1-\alpha)y^t + \alpha S^t f^t_t$ (8.7). The... multilingual corpora. In Manuela M. Veloso, editor, IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence
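The iterative solution quoted above, f ← (1−α)y + αSf, is a standard score-regularization (label-propagation-style) update. A minimal sketch, with an assumed row-stochastic similarity matrix S and α as illustrative inputs:

```python
import numpy as np

def regularize_scores(y, S, alpha=0.5, iters=100):
    """Iterate f <- (1 - alpha) * y + alpha * S @ f.

    y: initial retrieval scores; S: row-stochastic similarity matrix.
    """
    f = y.copy()
    for _ in range(iters):
        f = (1 - alpha) * y + alpha * S @ f
    return f

# Toy similarity graph over 3 documents (row-normalized).
S = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
y = np.array([1.0, 0.0, 0.0])

f = regularize_scores(y, S, alpha=0.5)
# Closed form of the fixed point: f* = (1 - alpha) * (I - alpha * S)^{-1} y
f_star = 0.5 * np.linalg.solve(np.eye(3) - 0.5 * S, y)
```

Because αS is a contraction here, the iteration converges to the closed-form fixed point: the query document's score is smoothed onto its neighbors without discarding the original scores.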
Identifying Basketball Performance Indicators in Regular Season and Playoff Games
García, Javier; Ibáñez, Sergio J.; De Santos, Raúl Martinez; Leite, Nuno; Sampaio, Jaime
2013-01-01
The aim of the present study was to identify the basketball game performance indicators which best discriminate winners and losers in regular season and playoff games. The sample was composed of 323 games of the ACB Spanish Basketball League, from the regular season (n=306) and from the playoffs (n=17). A preliminary cluster analysis allowed splitting the sample into balanced (equal to or below 12 points), unbalanced (between 13 and 28 points) and very unbalanced games (above 28 points). A discriminant analysis was then used to identify the performance indicators in regular season and playoff games. In regular season games, the winning teams dominated in assists, defensive rebounds, and successful 2- and 3-point field goals. In playoff games, however, the winning teams' superiority lay only in defensive rebounding. In practical terms, these results may help coaches design training programs that reflect the importance of having different offensive set plays and include specific conditioning programs to prepare for defensive rebounding. PMID:23717365
Analysis of Tikhonov regularization for function approximation by neural networks.
Burger, Martin; Neubauer, Andreas
2003-01-01
This paper is devoted to the convergence and stability analysis of Tikhonov regularization for function approximation by a class of feed-forward neural networks with one hidden layer and linear output layer. We investigate two frequently used approaches, namely regularization by output smoothing and regularization by weight decay, as well as a combination of both methods to combine their advantages. We show that in all cases stable approximations are obtained converging to the approximated function in a desired Sobolev space as the noise in the data tends to zero (in the weaker L(2)-norm) if the regularization parameter and the number of units in the network are chosen appropriately. Under additional smoothness assumptions we are able to show convergence rates results in terms of the noise level and the number of units in the network. In addition, we show how the theoretical results can be applied to the important classes of perceptrons with one hidden layer and to translation networks. Finally, the performance of the different approaches is compared in some numerical examples.
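A minimal sketch of one of the two approaches studied above, regularization by weight decay for a one-hidden-layer network with linear output. The architecture, λ, and training loop are illustrative assumptions, not the paper's analysis setup:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 50)[:, None]
y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(X.shape)

# One hidden layer (tanh), linear output; lam is the weight-decay parameter.
H = 10
W1 = 0.5 * rng.standard_normal((1, H))
b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1))
lam, lr = 1e-3, 0.05

def forward(X):
    A = np.tanh(X @ W1 + b1)
    return A, A @ W2

losses = []
for _ in range(1000):
    A, pred = forward(X)
    err = pred - y
    # Tikhonov / weight-decay objective: data misfit + lam * ||weights||^2
    losses.append(float(np.mean(err**2) + lam * ((W1**2).sum() + (W2**2).sum())))
    gW2 = 2 * A.T @ err / len(X) + 2 * lam * W2
    dA = (err @ W2.T) * (1 - A**2)          # backprop through tanh
    gW1 = 2 * X.T @ dA / len(X) + 2 * lam * W1
    gb1 = 2 * dA.mean(axis=0)
    W2 -= lr * gW2
    W1 -= lr * gW1
    b1 -= lr * gb1
```

The weight-decay term penalizes large weights exactly as a Tikhonov functional penalizes large solutions, which is the stabilizing mechanism the convergence analysis relies on.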
New vision based navigation clue for a regular colonoscope's tip
NASA Astrophysics Data System (ADS)
Mekaouar, Anouar; Ben Amar, Chokri; Redarce, Tanneguy
2009-02-01
Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to be performed safely. Indeed, the practitioner must contend with both the tortuousness of the colon and the mastering of a colonoscope, taking the visual data acquired by the scope's tip into account and relying mostly on common sense and skill to steer it in a fashion that promotes safe insertion of the device's shaft. In that context, we propose a new navigation clue for the tip of a regular colonoscope to assist surgeons during a colonoscopic examination. First, we consider a patch of the inner colon depicted in a regular colonoscopy frame. We then perform a sketchy 3D reconstruction of the corresponding 2D data, and a suggested navigation trajectory is derived on the basis of the obtained relief. Both the visible and invisible lumen cases are considered. Owing to its low computational cost, this strategy can accommodate intraoperative configuration changes and thus reduce the effect of the colon's non-rigidity. It also tends to provide a safe navigation trajectory through the whole colon, since the approach aims to keep the extremity of the instrument as far as possible from the colon wall during navigation. To make the process effective, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.
Interrupting Sitting Time with Regular Walks Attenuates Postprandial Triglycerides.
Miyashita, M; Edamoto, K; Kidokoro, T; Yanaoka, T; Kashiwabara, K; Takahashi, M; Burns, S
2016-02-01
We compared the effects of prolonged sitting with the effects of sitting interrupted by regular walking and the effects of prolonged sitting after continuous walking on postprandial triglyceride in postmenopausal women. 15 participants completed 3 trials in random order: 1) prolonged sitting, 2) regular walking, and 3) prolonged sitting preceded by continuous walking. During the sitting trial, participants rested for 8 h. For the walking trials, participants walked briskly in either twenty 90-sec bouts over 8 h or one 30-min bout in the morning (09:00-09:30). Except for walking, both exercise trials mimicked the sitting trial. In each trial, participants consumed a breakfast (08:00) and lunch (11:00). Blood samples were collected in the fasted state and at 2, 4, 6 and 8 h after breakfast. The serum triglyceride incremental area under the curve was 15 and 14% lower after regular walking compared with prolonged sitting and prolonged sitting after continuous walking (4.73±2.50 vs. 5.52±2.95 vs. 5.50±2.59 mmol/L∙8 h respectively, main effect of trial: P=0.023). Regularly interrupting sitting time with brief bouts of physical activity can reduce postprandial triglyceride in postmenopausal women.
Information fusion in regularized inversion of tomographic pumping tests
Bohling, G.C.; ,
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
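The data-fit / model-norm tradeoff described above can be sketched for a generic linear forward model with a Tikhonov penalty toward a background model. The operator and data here are synthetic stand-ins, not the pumping-test flow model:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((30, 20))      # generic forward operator (sensitivities)
m_true = rng.standard_normal(20)
d = G @ m_true + 0.05 * rng.standard_normal(30)

def tikhonov(G, d, lam):
    """Minimize ||G m - d||^2 + lam * ||m - m0||^2 with background m0 = 0."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

# Sweep the regularization weight to trace the data-fit / model-norm tradeoff.
lams = [1e-3, 1e-1, 1e1, 1e3]
norms = [np.linalg.norm(tikhonov(G, d, lam)) for lam in lams]
fits = [np.linalg.norm(G @ tikhonov(G, d, lam) - d) for lam in lams]
```

Sweeping λ and plotting `fits` against `norms` gives the usual L-curve: heavier weighting pulls the estimate toward the background model at the cost of data misfit.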
Involving Impaired, Disabled, and Handicapped Persons in Regular Camp Programs.
ERIC Educational Resources Information Center
American Alliance for Health, Physical Education, and Recreation, Washington, DC. Information and Research Utilization Center.
The publication provides some broad guidelines for serving impaired, disabled, and handicapped children in nonspecialized or regular day and residential camps. Part One on the rationale and basis for integrated camping includes three chapters which cover mainstreaming and the normalization principle, the continuum of services (or Cascade System)…
32 CFR 901.14 - Regular airmen category.
Code of Federal Regulations, 2013 CFR
2013-07-01
... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...
32 CFR 901.14 - Regular airmen category.
Code of Federal Regulations, 2011 CFR
2011-07-01
... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS APPOINTMENT TO THE UNITED STATES AIR FORCE ACADEMY Nomination Procedures and Requirements § 901.14... Regular component of the Air Force may apply for nomination. Selectees must be in active duty...
The Student with Albinism in the Regular Classroom.
ERIC Educational Resources Information Center
Ashley, Julia Robertson
This booklet, intended for regular education teachers who have children with albinism in their classes, begins with an explanation of albinism, then discusses the special needs of the student with albinism in the classroom, and presents information about adaptations and other methods for responding to these needs. Special social and emotional…
Simulated Administration of a Regular Guidance Operation (SARGO).
ERIC Educational Resources Information Center
Fredrickson, Ronald H.; Popken, Charles F.
Simulated Administration of a Regular Guidance Operation (SARGO) is a program for the training of directors of guidance and pupil personnel services. The objective of SARGO is to prepare directors of guidance services to: (1) prepare a written description of a pupil personnel program; (2) interact with a school administrator to clarify role…
Nonnative Processing of Verbal Morphology: In Search of Regularity
ERIC Educational Resources Information Center
Gor, Kira; Cook, Svetlana
2010-01-01
There is little agreement on the mechanisms involved in second language (L2) processing of regular and irregular inflectional morphology and on the exact role of age, amount, and type of exposure to L2 resulting in differences in L2 input and use. The article contributes to the ongoing debates by reporting the results of two experiments on Russian…
Regularity and Energy Conservation for the Compressible Euler Equations
NASA Astrophysics Data System (ADS)
Feireisl, Eduard; Gwiazda, Piotr; Świerczewska-Gwiazda, Agnieszka; Wiedemann, Emil
2017-03-01
We give sufficient conditions on the regularity of solutions to the inhomogeneous incompressible Euler and the compressible isentropic Euler systems in order for the energy to be conserved. Our strategy relies on commutator estimates similar to those employed by Constantin et al. for the homogeneous incompressible Euler equations.
Preverbal Infants Infer Intentional Agents from the Perception of Regularity
ERIC Educational Resources Information Center
Ma, Lili; Xu, Fei
2013-01-01
Human adults have a strong bias to invoke intentional agents in their intuitive explanations of ordered wholes or regular compositions in the world. Less is known about the ontogenetic origin of this bias. In 4 experiments, we found that 9-to 10-month-old infants expected a human hand, but not a mechanical tool with similar affordances, to be the…
Low thrust space vehicle trajectory optimization using regularized variables
NASA Technical Reports Server (NTRS)
Schwenzfeger, K. J.
1974-01-01
Optimizing the trajectory of a low thrust space vehicle usually means solving a nonlinear two point boundary value problem. In general, accuracy requirements necessitate extensive computation times. In celestial mechanics, regularizing transformations of the equations of motion are used to eliminate computational and analytical problems that occur during close approaches to gravitational force centers. It was shown in previous investigations that regularization in the formulation of the trajectory optimization problem may reduce the computation time. In this study, a set of regularized equations describing the optimal trajectory of a continuously thrusting space vehicle is derived. The computational characteristics of the set are investigated and compared to the classical Newtonian unregularized set of equations. The comparison is made for low thrust, minimum time, escape trajectories and numerical calculations of Keplerian orbits. The comparison indicates that in the cases investigated for bad initial guesses of the known boundary values a remarkable reduction in the computation time was achieved. Furthermore, the investigated set of regularized equations shows high numerical stability even for long duration flights and is less sensitive to errors in the guesses of the unknown boundary values.
Mainstreaming: Educable Mentally Retarded Children in Regular Classes.
ERIC Educational Resources Information Center
Birch, Jack W.
Described in the monograph are mainstreaming programs for educable mentally retarded (EMR) children in six variously sized school districts within five states. It is noted that mainstreaming is based on the principle of educating most children in the regular classroom and providing special education on the basis of learning needs rather than…
Regular Class Participation System (RCPS). A Final Report.
ERIC Educational Resources Information Center
Ferguson, Dianne L.; And Others
The Regular Class Participation System (RCPS) project attempted to develop, implement, and validate a system for placing and maintaining students with severe disabilities in general education classrooms, with a particular emphasis on achieving both social and learning outcomes for students. A teacher-based planning strategy was developed and…
The Visually Impaired Student in the Regular Classroom.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton.
The guide provides strategies for regular teachers to use with visually impaired (VI) students in the province of Alberta, Canada. After an introduction, definitions of terms such as "adventitiously blind" are presented. Next addressed are effects of visual impairment on cognitive development, emotional and social aspects, and…
Identifying and Exploiting Spatial Regularity in Data Memory References
Mohan, T; de Supinski, B R; McKee, S A; Mueller, F; Yoo, A; Schulz, M
2003-07-24
The growing processor/memory performance gap causes the performance of many codes to be limited by memory accesses. If known to exist in an application, strided memory accesses forming streams can be targeted by optimizations such as prefetching, relocation, remapping, and vector loads. Undetected, they can be a significant source of memory stalls in loops. Existing stream-detection mechanisms either require special hardware, which may not gather statistics for subsequent analysis, or are limited to compile-time detection of array accesses in loops. Formally, little treatment has been accorded to the subject; the concept of locality fails to capture the existence of streams in a program's memory accesses. The contributions of this paper are as follows. First, we define spatial regularity as a means to discuss the presence and effects of streams. Second, we develop measures to quantify spatial regularity, and we design and implement an on-line, parallel algorithm to detect streams - and hence regularity - in running applications. Third, we use examples from real codes and common benchmarks to illustrate how derived stream statistics can be used to guide the application of profile-driven optimizations. Overall, we demonstrate the benefits of our novel regularity metric as a low-cost instrument to detect potential for code optimizations affecting memory performance.
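The stream notion defined above can be illustrated with a toy detector. This sketch (not the paper's on-line parallel algorithm) simply groups an address trace into constant-stride runs of a minimum length:

```python
def detect_streams(trace, min_len=4):
    """Group a memory-address trace into constant-stride runs.

    Returns a list of (start_addr, stride, length) for runs of at least
    min_len accesses with the same stride (a 'stream').
    """
    streams = []
    run_start, run_len = 0, 1
    for i in range(1, len(trace)):
        stride = trace[i] - trace[i - 1]
        prev = trace[run_start + 1] - trace[run_start] if run_len > 1 else stride
        if run_len > 1 and stride != prev:
            if run_len >= min_len:
                streams.append((trace[run_start], prev, run_len))
            run_start, run_len = i - 1, 2
        else:
            run_len += 1
    if run_len >= min_len:
        stride = trace[run_start + 1] - trace[run_start]
        streams.append((trace[run_start], stride, run_len))
    return streams

# A stride-8 stream of 6 accesses followed by irregular accesses.
trace = [0, 8, 16, 24, 32, 40, 1000, 7, 523]
streams = detect_streams(trace)
```

Statistics like these (start, stride, length) are exactly what prefetching or remapping optimizations would consume; the on-line algorithm in the paper gathers them while the application runs.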
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
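The prox link mentioned in the abstract can be sketched for p = 1: the proximal map of the Schatten-1 (nuclear) norm is obtained by applying the scalar ℓ1 prox (soft-thresholding) to the singular values. This is a generic illustration of that relation, not the paper's ADMM solver:

```python
import numpy as np

def prox_l1(x, t):
    """Proximal map of t * ||x||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_schatten1(X, t):
    """Proximal map of t * ||X||_S1 (nuclear norm): apply the scalar
    l1 prox to the singular values (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_l1(s, t)) @ Vt

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 4))
Y = prox_schatten1(X, t=0.5)
```

On a diagonal matrix the two maps coincide entry-by-entry, which is the scalar-to-matrix reduction the link formalizes.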
Cost Effectiveness of Premium Versus Regular Gasoline in MCPS Buses.
ERIC Educational Resources Information Center
Baacke, Clifford M.; Frankel, Steven M.
The primary question posed in this study is whether premium or regular gasoline is more cost effective for the Montgomery County Public School (MCPS) bus fleet, as a whole, when miles-per-gallon, cost-per-gallon, and repair costs associated with mileage are considered. On average, both miles-per-gallon, and repair costs-per-mile favor premium…
Regular Strongly Typical Blocks of $\mathcal{O}^{\mathfrak{q}}$
NASA Astrophysics Data System (ADS)
Frisk, Anders; Mazorchuk, Volodymyr
2009-10-01
We use the technique of Harish-Chandra bimodules to prove that regular strongly typical blocks of the category $\mathcal{O}$ for the queer Lie superalgebra $\mathfrak{q}_n$ are equivalent to the corresponding blocks of the category $\mathcal{O}$ for the Lie algebra $\mathfrak{gl}_n$.
Adult Regularization of Inconsistent Input Depends on Pragmatic Factors
ERIC Educational Resources Information Center
Perfors, Amy
2016-01-01
In a variety of domains, adults who are given input that is only partially consistent do not discard the inconsistent portion (regularize) but rather maintain the probability of consistent and inconsistent portions in their behavior (probability match). This research investigates the possibility that adults probability match, at least in part,…
Statistical regularities in art: Relations with visual coding and perception.
Graham, Daniel J; Redies, Christoph
2010-07-21
Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study.
The Hearing Impaired Student in the Regular Classroom.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton.
The guide provides strategies for teachers to use with deaf and hearing impaired (HI) students in regular classrooms in the province of Alberta, Canada. An introductory section includes symptoms of a suspected hearing loss and a sample audiogram to aid teachers in recognizing the problem. Ways to meet special needs at different age levels are…
The Physically/Medically Handicapped Student in the Regular Classroom.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton.
The guide outlines modifications, adaptations, and social interaction approaches for school staff to use with physically handicapped and regular students in integrated classrooms in the province of Alberta, Canada. Guidelines are provided for the following main categories and subsets (in parentheses): lifting and transferring techniques (methods…
The properties of probabilistic simple regular sticker system
NASA Astrophysics Data System (ADS)
Selvarajoo, Mathuri; Fong, Wan Heng; Sarmin, Nor Haniza; Turaev, Sherzod
2015-10-01
A mathematical model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced in 1998. In a sticker system, the sticker operation is based on the Watson-Crick complementarity of DNA molecules. The computation of a sticker system starts from an incomplete double-stranded sequence; a complete double-stranded sequence is then obtained by iterated sticking operations. It is known that sticker systems with finite sets of axioms and sticker rules (including the simple regular sticker system) generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of the languages generated by sticker systems. In this paper, we study the properties of probabilistic simple regular sticker systems. In this variant of the sticker system, probabilities are associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings. The language is then selected according to some probabilistic requirements. We prove that the probabilistic enhancement increases the computational power of simple regular sticker systems.
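The probability bookkeeping described above (a derived string's probability is the product of the probabilities of the initial strands used, with a threshold selecting the language) can be sketched on a toy alphabet. The axioms and cutoff below are hypothetical, not taken from the paper:

```python
from itertools import product

# Hypothetical toy model: each axiom (initial strand) carries a probability.
axioms = {"ab": 0.6, "ba": 0.4}

def string_probability(derivation):
    """derivation: sequence of axiom labels stuck together in order;
    the string's probability is the product over all occurrences."""
    p = 1.0
    for a in derivation:
        p *= axioms[a]
    return p

def probabilistic_language(max_sticks, cutoff):
    """Enumerate derivations up to max_sticks sticking steps and keep
    strings whose best derivation probability meets the cutoff."""
    lang = {}
    for n in range(1, max_sticks + 1):
        for deriv in product(axioms, repeat=n):
            s = "".join(deriv)
            p = string_probability(deriv)
            if p >= cutoff:
                lang[s] = max(lang.get(s, 0.0), p)
    return lang

L = probabilistic_language(max_sticks=2, cutoff=0.25)
```

Raising the cutoff prunes strings from the language, which is the mechanism by which the probabilistic requirement restricts (and thereby can change the power of) the generated language.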
From Numbers to Letters: Feedback Regularization in Visual Word Recognition
ERIC Educational Resources Information Center
Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel
2010-01-01
Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…
New Algorithms and Sparse Regularization for Synthetic Aperture Radar Imaging
2015-10-26
Demanet, Department of Mathematics, Massachusetts Institute of Technology. • Grant title: New Algorithms and Sparse Regularization for Synthetic Aperture... statistical analysis of one such method, the so-called MUSIC algorithm (multiple signal classification). We have a publication that mathematically justifies the scaling of the phase transition
Regular and homeward travel speeds of arctic wolves
Mech, L.D.
1994-01-01
Single wolves (Canis lupus arctos), a pair, and a pack of five habituated to the investigator on an all-terrain vehicle were followed on Ellesmere Island, Northwest Territories, Canada, during summer. Their mean travel speed was measured on barren ground at 8.7 km/h during regular travel and 10.0 km/h when returning to a den.
Sparse regularization techniques provide novel insights into outcome integration processes.
Mohr, Holger; Wolfensteller, Uta; Frimmel, Steffi; Ruge, Hannes
2015-01-01
By exploiting information that is contained in the spatial arrangement of neural activations, multivariate pattern analysis (MVPA) can detect distributed brain activations which are not accessible by standard univariate analysis. Recent methodological advances in MVPA regularization techniques have made it feasible to produce sparse discriminative whole-brain maps with highly specific patterns. Furthermore, the most recent refinement, the Graph Net, explicitly takes the 3D-structure of fMRI data into account. Here, these advanced classification methods were applied to a large fMRI sample (N=70) in order to gain novel insights into the functional localization of outcome integration processes. While the beneficial effect of differential outcomes is well-studied in trial-and-error learning, outcome integration in the context of instruction-based learning has remained largely unexplored. In order to examine neural processes associated with outcome integration in the context of instruction-based learning, two groups of subjects underwent functional imaging while being presented with either differential or ambiguous outcomes following the execution of varying stimulus-response instructions. While no significant univariate group differences were found in the resulting fMRI dataset, L1-regularized (sparse) classifiers performed significantly above chance and also clearly outperformed the standard L2-regularized (dense) Support Vector Machine on this whole-brain between-subject classification task. Moreover, additional L2-regularization via the Elastic Net and spatial regularization by the Graph Net improved interpretability of discriminative weight maps but were accompanied by reduced classification accuracies. Most importantly, classification based on sparse regularization facilitated the identification of highly specific regions differentially engaged under ambiguous and differential outcome conditions, comprising several prefrontal regions previously associated with
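The sparsity mechanism the study relies on, L1 regularization zeroing out uninformative features, can be sketched with ISTA on synthetic data. The data, λ, and solver are illustrative assumptions, not the paper's MVPA pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 20
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[[2, 7]] = [3.0, -2.0]            # only 2 of 20 features informative
y = X @ w_true + 0.1 * rng.standard_normal(n)

def ista_lasso(X, y, lam=5.0, iters=500):
    """L1-regularized (sparse) linear model via ISTA:
    a gradient step on the squared loss, then soft-thresholding."""
    L = np.linalg.eigvalsh(X.T @ X).max()   # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ w - y)
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

w = ista_lasso(X, y)
```

The recovered weight vector is sparse, with the informative features surviving the threshold; this is the sense in which sparse classifiers yield "highly specific" discriminative maps compared with dense L2 solutions, which spread weight over all features.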
A simple way to measure daily lifestyle regularity
NASA Technical Reports Server (NTRS)
Monk, Timothy H.; Frank, Ellen; Potts, Jaime M.; Kupfer, David J.
2002-01-01
A brief diary instrument to quantify daily lifestyle regularity (SRM-5) is developed and compared with a much longer version of the instrument (SRM-17) described and used previously. Three studies are described. In Study 1, SRM-17 scores (2 weeks) were collected from a total of 293 healthy control subjects (both genders) aged between 19 and 92 years. Five items (1) Get out of bed, (2) First contact with another person, (3) Start work, housework or volunteer activities, (4) Have dinner, and (5) Go to bed were then selected from the 17 items and SRM-5 scores calculated as if these five items were the only ones collected. Comparisons were made with SRM-17 scores from the same subject-weeks, looking at correlations between the two SRM measures, and the effects of age and gender on lifestyle regularity as measured by the two instruments. In Study 2 this process was repeated in a group of 27 subjects who were in remission from unipolar depression after treatment with psychotherapy and who completed SRM-17 for at least 20 successive weeks. SRM-5 and SRM-17 scores were then correlated within an individual using time as the random variable, allowing an indication of how successful SRM-5 was in tracking changes in lifestyle regularity (within an individual) over time. In Study 3 an SRM-5 diary instrument was administered to 101 healthy control subjects (both genders, aged 20-59 years) for two successive weeks to obtain normative measures and to test for correlations with age and morningness. Measures of lifestyle regularity from SRM-5 correlated quite well (about 0.8) with those from SRM-17 both between subjects, and within-subjects over time. As a detector of irregularity as defined by SRM-17, the SRM-5 instrument showed acceptable values of kappa (0.69), sensitivity (74%) and specificity (95%). There were, however, differences in mean level, with SRM-5 scores being about 0.9 units [about one standard deviation (SD)] above SRM-17 scores from the same subject-weeks. SRM-5
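The scoring idea behind the SRM can be sketched in a few lines. The ±45-minute "hit" window and the per-item averaging below are assumptions for illustration; the published instrument defines the exact scoring rules.

```python
from statistics import mean

def srm_score(times_by_item):
    """Rough sketch of the Social Rhythm Metric idea (details assumed, not the
    published scoring rules): an event counts as a 'hit' when it occurs within
    45 minutes of its habitual (mean) time; the score is the average number of
    hits per item over the week, so 7 means perfectly regular."""
    hits, items = 0, 0
    for times in times_by_item.values():
        done = [t for t in times if t is not None]   # minutes past midnight
        if len(done) < 2:
            continue
        habitual = mean(done)
        hits += sum(abs(t - habitual) <= 45 for t in done)
        items += 1
    return hits / items if items else 0.0
```

A subject who gets out of bed at nearly the same time every day scores near 7; a highly irregular week scores near 0.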
A theoretical foundation for multi-scale regular vegetation patterns.
Tarnita, Corina E; Bonachela, Juan A; Sheffer, Efrat; Guyton, Jennifer A; Coverdale, Tyler C; Long, Ryan A; Pringle, Robert M
2017-01-18
Self-organized regular vegetation patterns are widespread and thought to mediate ecosystem functions such as productivity and robustness, but the mechanisms underlying their origin and maintenance remain disputed. Particularly controversial are landscapes of overdispersed (evenly spaced) elements, such as North American Mima mounds, Brazilian murundus, South African heuweltjies, and, famously, Namibian fairy circles. Two competing hypotheses are currently debated. On the one hand, models of scale-dependent feedbacks, whereby plants facilitate neighbours while competing with distant individuals, can reproduce various regular patterns identified in satellite imagery. Owing to deep theoretical roots and apparent generality, scale-dependent feedbacks are widely viewed as a unifying and near-universal principle of regular-pattern formation despite scant empirical evidence. On the other hand, many overdispersed vegetation patterns worldwide have been attributed to subterranean ecosystem engineers such as termites, ants, and rodents. Although potentially consistent with territorial competition, this interpretation has been challenged theoretically and empirically and (unlike scale-dependent feedbacks) lacks a unifying dynamical theory, fuelling scepticism about its plausibility and generality. Here we provide a general theoretical foundation for self-organization of social-insect colonies, validated using data from four continents, which demonstrates that intraspecific competition between territorial animals can generate the large-scale hexagonal regularity of these patterns. However, this mechanism is not mutually exclusive with scale-dependent feedbacks. Using Namib Desert fairy circles as a case study, we present field data showing that these landscapes exhibit multi-scale patterning-previously undocumented in this system-that cannot be explained by either mechanism in isolation. These multi-scale patterns and other emergent properties, such as enhanced resistance to
Semi-regular biorthogonal pairs and generalized Riesz bases
NASA Astrophysics Data System (ADS)
Inoue, H.
2016-11-01
In this paper we introduce general theories of semi-regular biorthogonal pairs and generalized Riesz bases, and their physical applications. Here we deal with biorthogonal sequences {ϕn} and {ψn} in a Hilbert space H, with domains D(ϕ) = {x ∈ H ; ∑_{k=0}^∞ |(x|ϕk)|² < ∞} and D(ψ) = {x ∈ H ; ∑_{k=0}^∞ |(x|ψk)|² < ∞}, and linear spans Dϕ ≡ Span{ϕn} and Dψ ≡ Span{ψn}. A biorthogonal pair ({ϕn}, {ψn}) is called regular if both Dϕ and Dψ are dense in H, and it is called semi-regular if either Dϕ and D(ϕ) or Dψ and D(ψ) are dense in H. In a previous paper [H. Inoue, J. Math. Phys. 57, 083511 (2016)], we have shown that if ({ϕn}, {ψn}) is a regular biorthogonal pair then both {ϕn} and {ψn} are generalized Riesz bases defined in the work of Inoue and Takakura [J. Math. Phys. 57, 083505 (2016)]. Here we shall show that the same result holds true if the pair is only semi-regular by using operators Tϕ,e, Te,ϕ, Tψ,e, and Te,ψ defined by an orthonormal basis e in H and a biorthogonal pair ({ϕn}, {ψn}). Furthermore, we shall apply this result to pseudo-bosons in the sense of the papers of Bagarello [J. Math. Phys. 51, 023531 (2010); J. Phys. A 44, 015205 (2011); Phys. Rev. A 88, 032120 (2013); and J. Math. Phys. 54, 063512 (2013)].
Two vortex-blob regularization models for vortex sheet motion
NASA Astrophysics Data System (ADS)
Sohn, Sung-Ik
2014-04-01
Evolving vortex sheets generally form singularities in finite time. The vortex blob model is an approach to regularize the vortex sheet motion and evolve past singularity formation. In this paper, we thoroughly compare two such regularizations: the Krasny-type model and the Beale-Majda model. A linear stability analysis shows that both models have exponentially decaying growth rates for high wavenumbers, but the Beale-Majda model has a faster decaying rate than the Krasny model. The Beale-Majda model thus gives a stronger regularization to the solution. We apply the blob models to two example problems: a periodic vortex sheet and an elliptically loaded wing. The numerical results show that the solutions of the two models are similar at large and small scales, but differ appreciably at intermediate scales. The sheet of the Beale-Majda model has more spiral turns than that of the Krasny-type model for the same value of the regularization parameter δ. We give numerical evidence that the solutions of the two models agree over an increasing number of spiral turns and tend to converge to the same limit as δ is decreased. The inner spiral turns of the blob models behave differently from the outer turns and satisfy a self-similar form. We also examine irregular motions of the sheet at late times and find that these irregular motions shrink as δ is decreased. This suggests convergence of the blob solution to the weak solution with infinitely many regular spiral turns.
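The Krasny-type δ-regularization replaces the singular point-vortex kernel 1/(z_j − z_k) by the mollified kernel conj(z_j − z_k)/(|z_j − z_k|² + δ²). A minimal sketch of the resulting induced-velocity sum (the Beale-Majda kernel differs in how δ enters, which is the contrast the paper studies):

```python
import numpy as np

def blob_velocity(z, gamma, delta):
    """Krasny-regularized velocities u + i v for point vortices at complex
    positions z with circulations gamma: the singular Birkhoff-Rott kernel
    1/(z_j - z_k) becomes conj(z_j - z_k)/(|z_j - z_k|**2 + delta**2)."""
    dz = z[:, None] - z[None, :]
    denom = np.abs(dz) ** 2 + delta ** 2
    kernel = np.conj(dz) / denom
    np.fill_diagonal(kernel, 0.0)          # no self-induction
    w = (kernel * gamma[None, :]).sum(axis=1) / (2j * np.pi)   # w = u - i v
    return np.conj(w)
```

A single counterclockwise vortex of circulation 2π at the origin induces, at z = 1, a velocity of magnitude 1/(1 + δ²) in the +y direction, recovering the usual point-vortex value as δ → 0.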
Separate Magnitude and Phase Regularization via Compressed Sensing
Noll, Douglas C.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Compressed sensing (CS) has been used for accelerating magnetic resonance imaging (MRI) acquisitions, but its use in applications with rapid spatial phase variations is challenging, e.g., proton resonance frequency shift (PRF-shift) thermometry and velocity mapping. Previously, an iterative MRI reconstruction with separate magnitude and phase regularization was proposed for applications where magnitude and phase maps are both of interest, but it requires fully sampled data and unwrapped phase maps. In this paper, CS is combined into this framework to reconstruct magnitude and phase images accurately from undersampled data. Moreover, new phase regularization terms are proposed to accommodate phase wrapping and to reconstruct images with encoded phase variations, e.g., PRF-shift thermometry and velocity mapping. The proposed method is demonstrated with simulated thermometry data and in-vivo velocity mapping data and compared to conventional phase corrected CS. PMID:22552571
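In generic form (symbols assumed here, not the paper's exact notation), the separate-regularization reconstruction solves a joint problem over the magnitude image m and phase image φ:

```latex
\hat{\mathbf m},\,\hat{\boldsymbol\varphi}
  \;=\; \arg\min_{\mathbf m,\,\boldsymbol\varphi}\;
    \bigl\| A\,(\mathbf m \odot e^{i\boldsymbol\varphi}) - \mathbf y \bigr\|_2^2
    \;+\; \lambda_m \, R_m(\mathbf m)
    \;+\; \lambda_\varphi \, R_\varphi(\boldsymbol\varphi)
```

where A is the (undersampled) encoding operator, y the acquired k-space data, and R_m, R_φ the separate magnitude and phase regularizers; compressed sensing enters through a sparsity-promoting choice of R_m, while the paper's contribution is phase regularizers R_φ that tolerate wrapping and encoded phase variations.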
Improved regularized solution of the inverse problem in turbidimetric measurements.
Mroczka, Janusz; Szczuczyński, Damian
2010-08-20
We present results of simulation research on the constrained regularized least-squares (CRLS) solution of the ill-conditioned inverse problem in turbidimetric measurements. The problem is formulated in terms of the discretized Fredholm integral equation of the first kind. The inverse problem in turbidimetric measurements consists in determining the particle size distribution (PSD) function of a particulate system on the basis of turbidimetric measurements. The desired PSD should satisfy two constraints: nonnegativity of PSD values and normalization of the PSD to unity when integrated over the whole range of particle sizes. Incorporating these constraints into the regularized least-squares (RLS) method leads to the CRLS method, which is realized by means of an active set algorithm of quadratic programming. Results of simulation research prove that the CRLS method reconstructs the PSD considerably better than the RLS method, with better fidelity and smaller uncertainty.
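The constrained problem has a compact form: minimize ‖Ax − b‖² + λ‖Lx‖² subject to x ≥ 0 and sum(x) = 1. The paper solves it with an active-set quadratic program; the sketch below substitutes projected gradient with an exact simplex projection, which reaches the same constrained optimum on small problems.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(v.size) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def crls(A, b, L, lam, iters=5000):
    """Sketch of the CRLS idea: min ||Ax - b||^2 + lam*||Lx||^2 subject to
    nonnegativity and unit normalization of the PSD. Projected gradient is an
    illustrative stand-in for the paper's active-set QP."""
    H = A.T @ A + lam * (L.T @ L)
    g = A.T @ b
    step = 1.0 / np.linalg.norm(H, 2)
    x = np.full(A.shape[1], 1.0 / A.shape[1])
    for _ in range(iters):
        x = project_simplex(x - step * (H @ x - g))
    return x
```

With noiseless data generated from a normalized nonnegative PSD, the recovered x honors both constraints and fits the data closely.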
Regular Expression-Based Learning for METs Value Extraction.
Redd, Douglas; Kuang, Jinqiu; Mohanty, April; Bray, Bruce E; Zeng-Treitler, Qing
2016-01-01
Functional status as measured by exercise capacity is an important clinical variable in the care of patients with cardiovascular diseases. Exercise capacity is commonly reported in terms of Metabolic Equivalents (METs). In the medical records, METs values can often be found in a variety of clinical notes. To extract METs values, we adapted a machine-learning algorithm called REDEx to automatically generate regular expressions. Trained and tested on a set of 2701 manually annotated text snippets (i.e., short pieces of text), the regular expressions were able to achieve a good accuracy of 0.89 and an F-measure of 0.86. This extraction tool will allow us to process the notes of millions of cardiovascular patients and extract METs values for use by researchers and clinicians.
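REDEx learns its patterns from the annotated snippets; the hand-written pattern below is only an assumed illustration of the kind of expression such a tool might generate for phrases like "achieved 7 METs", "METs: 10.5", or "5-6 mets".

```python
import re

# Hypothetical pattern in the spirit of a learned METs extractor (assumption,
# not REDEx output): a number (optionally a range) before "METs", or "METs"
# followed by a separator and a number.
METS_RE = re.compile(
    r"(?:(\d+(?:\.\d+)?)(?:\s*-\s*\d+(?:\.\d+)?)?\s*mets?\b"   # "7 METs", "5-6 mets"
    r"|mets?\s*[:=]?\s*(\d+(?:\.\d+)?))",                      # "METs: 10.5"
    re.IGNORECASE,
)

def extract_mets(text):
    """Return the numeric METs values found in a clinical-note snippet."""
    return [float(a or b) for a, b in METS_RE.findall(text)]
```

For a range such as "5-6 mets" this sketch keeps the lower bound; a production extractor would need a policy for ranges and for false-positive contexts.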
Information transmission using non-Poisson regular firing.
Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru
2013-04-01
In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
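A standard way to model firing that is more regular than Poisson is a renewal process with gamma-distributed interspike intervals (ISIs); the gamma shape parameter controls regularity, with CV = 1/√shape. A minimal sketch (a generic construction, not the paper's estimator):

```python
import numpy as np

def gamma_spike_train(rate, shape, n, rng):
    """Spike times with gamma-distributed ISIs of mean 1/rate.
    shape=1 reproduces a Poisson process (CV of ISIs = 1); larger shape gives
    more regular, non-Poisson firing (CV = 1/sqrt(shape))."""
    isi = rng.gamma(shape, 1.0 / (rate * shape), size=n)
    return np.cumsum(isi), isi

def cv(x):
    """Coefficient of variation of the interspike intervals."""
    return x.std() / x.mean()

rng = np.random.default_rng(1)
_, isi_poisson = gamma_spike_train(10.0, 1.0, 20000, rng)
_, isi_regular = gamma_spike_train(10.0, 4.0, 20000, rng)
```

The lower CV of the shape-4 train is the kind of regularity that, per the abstract, lowers the detection threshold for rate fluctuations in proportion to the CV.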
Regularizing the r-mode Problem for Nonbarotropic Relativistic Stars
NASA Technical Reports Server (NTRS)
Lockitch, Keith H.; Andersson, Nils; Watts, Anna L.
2004-01-01
We present results for r-modes of relativistic nonbarotropic stars. We show that the main differential equation, which is formally singular at lowest order in the slow-rotation expansion, can be regularized if one considers the initial value problem rather than the normal mode problem. However, a more physically motivated way to regularize the problem is to include higher order terms. This allows us to develop a practical approach for solving the problem and we provide results that support earlier conclusions obtained for uniform density stars. In particular, we show that there will exist a single r-mode for each permissible combination of l and m. We discuss these results and provide some caveats regarding their usefulness for estimates of gravitational-radiation reaction timescales. The close connection between the seemingly singular relativistic r-mode problem and issues arising because of the presence of co-rotation points in differentially rotating stars is also clarified.
Statistical regularities in the rank-citation profile of scientists
NASA Astrophysics Data System (ADS)
Petersen, Alexander M.; Stanley, H. Eugene; Succi, Sauro
2011-12-01
Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile ci(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each ci(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different ci(r) profiles, our results demonstrate the utility of the βi scaling parameter in conjunction with hi for quantifying individual publication impact. We show that the total number of citations Ci tallied from a scientist's Ni papers scales as . Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
Giant regular polyhedra from calixarene carboxylates and uranyl
Pasquale, Sara; Sattin, Sara; Escudero-Adán, Eduardo C.; Martínez-Belmonte, Marta; de Mendoza, Javier
2012-01-01
Self-assembly of large multi-component systems is a common strategy for the bottom-up construction of discrete, well-defined, nanoscopic-sized cages. Icosahedral or pseudospherical viral capsids, built up from hundreds of identical proteins, constitute typical examples of the complexity attained by biological self-assembly. Chemical versions of the so-called 5 Platonic regular or 13 Archimedean semi-regular polyhedra are usually assembled combining molecular platforms with metals with commensurate coordination spheres. Here we report novel, self-assembled cages, using the conical-shaped carboxylic acid derivatives of calix[4]arene and calix[5]arene as ligands, and the uranyl cation UO₂²⁺ as a metallic counterpart, which coordinates with three carboxylates at the equatorial plane, giving rise to hexagonal bipyramidal architectures. As a result, octahedral and icosahedral anionic metallocages of nanoscopic dimensions are formed with an unusually small number of components. PMID:22510690
Compound L0 regularization method for image blind motion deblurring
NASA Astrophysics Data System (ADS)
Liu, Qiaohong; Sun, Liping; Shao, Zeguo
2016-09-01
Blind image deblurring is one of the challenging problems in image processing and computer vision. The main purpose of blind image deblurring is to estimate the correct blur kernel and restore the latent image with edge preservation, detail protection, and ringing suppression. In order to achieve ideal results, an innovative compound L0-regularized model is proposed to estimate the blur kernel by regularizing the sparsity property of natural images and two characteristics of the blur kernel, namely continuity and sparsity. In the alternating direction framework, the split Bregman algorithm and the half-quadratic splitting rule are alternately employed to optimize the proposed kernel estimation model. Finally, a nonblind restoration method with ringing suppression is developed to obtain the ultimate latent image. Extensive experiments demonstrate the efficiency and viability of the proposed method compared with some state-of-the-art blind deblurring methods.
Total Variation Regularization of Matrix-Valued Images
Christiansen, Oddvar; Lee, Tin-Man; Lie, Johan; Sinha, Usha; Chan, Tony F.
2007-01-01
We generalize the total variation restoration model, introduced by Rudin, Osher, and Fatemi in 1992, to matrix-valued data, in particular, to diffusion tensor images (DTIs). Our model is a natural extension of the color total variation model proposed by Blomgren and Chan in 1998. We treat the diffusion matrix D implicitly as the product D = LLT, and work with the elements of L as variables, instead of working directly on the elements of D. This ensures positive definiteness of the tensor during the regularization flow, which is essential when regularizing DTI. We perform numerical experiments on both synthetical data and 3D human brain DTI, and measure the quantitative behavior of the proposed model. PMID:18256729
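The key device in the model above is working on the Cholesky factor rather than the tensor: any L with nonzero diagonal yields a positive definite D = LLᵀ, so positive definiteness is maintained automatically during the regularization flow. A minimal sketch of the parametrization:

```python
import numpy as np

def tensor_from_chol(l):
    """Build a 3x3 diffusion tensor D = L @ L.T from the six free entries of a
    lower-triangular L (packed row by row). Positive definiteness of D holds
    for any l with nonzero diagonal entries, which is why the regularization
    flow operates on the elements of L rather than directly on D."""
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = l
    return L @ L.T
```

Unconstrained updates to l during optimization therefore never produce an invalid (indefinite) diffusion tensor.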
Persistent low-grade inflammation and regular exercise.
Astrom, Maj-Briit; Feigh, Michael; Pedersen, Bente Klarlund
2010-01-01
Persistent low-grade systemic inflammation is a feature of chronic diseases such as cardiovascular disease (CVD), type 2 diabetes and dementia and evidence exists that inflammation is a causal factor in the development of insulin resistance and atherosclerosis. Regular exercise offers protection against all of these diseases and recent evidence suggests that the protective effect of exercise may to some extent be ascribed to an anti-inflammatory effect of regular exercise. Visceral adiposity contributes to systemic inflammation and is independently associated with the occurrence of CVD, type 2 diabetes and dementia. We suggest that the anti-inflammatory effects of exercise may be mediated via a long-term effect of exercise leading to a reduction in visceral fat mass and/or by induction of anti-inflammatory cytokines with each bout of exercise.
Note on entanglement entropy and regularization in holographic interface theories
NASA Astrophysics Data System (ADS)
Gutperle, Michael; Trivella, Andrea
2017-03-01
We discuss the computation of holographic entanglement entropy for interface conformal field theories. The fact that globally well-defined Fefferman-Graham coordinates are difficult to construct makes the regularization of the holographic theory challenging. We introduce a simple new cutoff procedure, which we call "double cutoff" regularization. We test the new cutoff procedure by comparing the results for holographic entanglement entropies using other cutoff procedures and find agreement. We also study three dimensional conformal field theories with a two dimensional interface. In that case the dual bulk geometry is constructed using warped geometry with an AdS3 factor. We define an effective central charge to the interface through the Brown-Henneaux formula for the AdS3 factor. We investigate two concrete examples, showing that the same effective central charge appears in the computation of entanglement entropy and governs the conformal anomaly.
Existence and Regularity for Dynamic Viscoelastic Adhesive Contact with Damage
Kuttler, Kenneth L.; Shillor, Meir; Fernandez, Jose R.
2006-01-15
A model for the dynamic process of frictionless adhesive contact between a viscoelastic body and a reactive foundation, which takes into account the damage of the material resulting from tension or compression, is presented. Contact is described by the normal compliance condition. Material damage is modelled by the damage field, which measures the pointwise fractional decrease in the load-carrying capacity of the material, and its evolution is described by a differential inclusion. The model allows for different damage rates caused by tension or compression. The adhesion is modelled by the bonding field, which measures the fraction of active bonds on the contact surface. The existence of the unique weak solution is established using the theory of set-valued pseudomonotone operators introduced by Kuttler and Shillor (1999). Additional regularity of the solution is obtained when the problem data is more regular and satisfies appropriate compatibility conditions.
Mechanisms of evolution of avalanches in regular graphs.
Handford, Thomas P; Pérez-Reche, Francisco J; Taraskin, Sergei N
2013-06-01
A mapping of avalanches occurring in the zero-temperature random-field Ising model to life periods of a population experiencing immigration is established. Such a mapping allows the microscopic criteria for the occurrence of an infinite avalanche in a q-regular graph to be determined. A key factor for an avalanche of spin flips to become infinite is that it interacts in an optimal way with previously flipped spins. Based on these criteria, we explain why an infinite avalanche can occur in q-regular graphs only for q>3 and suggest that this criterion might be relevant for other systems. The generating function techniques developed for branching processes are applied to obtain analytical expressions for the durations, pulse shapes, and power spectra of the avalanches. The results show that only very long avalanches exhibit a significant degree of universality.
Soft Constraints in Nonlinear Spectral Fitting with Regularized Lineshape Deconvolution
Zhang, Yan; Shen, Jun
2012-01-01
This paper presents a novel method for incorporating a priori knowledge into regularized nonlinear spectral fitting as soft constraints. Regularization was recently introduced to lineshape deconvolution as a method for correcting spectral distortions. Here, the deconvoluted lineshape was described by a new type of lineshape model and applied to spectral fitting. The non-linear spectral fitting was carried out in two steps that were subject to hard constraints and soft constraints, respectively. The hard constraints step provided a starting point and, therefore, only the changes of the relevant variables were constrained in the soft constraints step and incorporated into the linear sub-steps of the Levenberg-Marquardt algorithm. The method was demonstrated using localized averaged echo time point resolved spectroscopy (PRESS) proton spectroscopy of human brains. PMID:22618964
Validity and Regularization of Classical Half-Space Equations
NASA Astrophysics Data System (ADS)
Li, Qin; Lu, Jianfeng; Sun, Weiran
2017-01-01
A recent result (Wu and Guo in Commun Math Phys 336(3):1473-1553, 2015) has shown that over the 2D unit disk, the classical half-space equation (CHS) for neutron transport does not capture the correct boundary layer behaviour, contrary to long-held belief. In this paper we develop a regularization technique for CHS to arbitrary order and use its first-order regularization to show that in the case of the 2D unit disk, although CHS misrepresents the boundary layer behaviour, it does give the correct boundary condition for the interior macroscopic (Laplace) equation. Therefore CHS is still a valid equation to recover the correct boundary condition for the interior Laplace equation over the 2D unit disk.
Regularization of inverse photomask synthesis to enhance manufacturability
NASA Astrophysics Data System (ADS)
Jia, Ningning; Wong, Alfred K.; Lam, Edmund Y.
2009-12-01
Mask manufacturability has been considered a major issue in the adoption of inverse lithography (IL) in practice. At smaller technology nodes, IL distorts the mask pattern more aggressively. The distorted mask often contains curvilinear contours and irregular shapes, which cast a heavy computational burden on segmentation and data preparation. Total variation (TV) has been used for regularization in previous work, but it is not very effective in regulating the mask shape to be rectangular. In this paper, we apply TV regularization not only to the mask image but also to the mask edges, which forces the curves of the edges to be more vertical or horizontal, because such curves give smaller TV values. Beyond rectilinearity, a group of geometrical specifications of the mask pattern set by mask manufacture rule control (MRC) is also important for mask manufacturability. To prevent violations of these specifications from appearing, we also propose an intervention scheme within the optimization framework.
Experimental evidence for formation mechanism of regular circular fringes
NASA Astrophysics Data System (ADS)
Wang, Y.; Zhu, R.; Wang, G.; Wang, P.; Li, H.; Zhang, W.; Ren, G.
2016-10-01
Laser active suppressing jamming is one of the most effective technologies for coping with optoelectric imaging systems. In the process of carrying out laser disturbing experiments, regular circular fringes often appeared on the detector, besides the laser spot converged by the optical system. First, the formation of the circular fringes was experimentally investigated by using a simple converging lens in place of the complex optical system. Moreover, the circular fringes were simulated based on the interference theory of coherent light. The agreement between the experimental phenomena and the simulated results showed that the formation mechanism of the regular circular fringes was interference, on the detector, between light reflected by the back surface of the lens and directly refracted light. Finally, the visibility of the circular fringes was calculated to lie between 0.05 and 0.22, according to the current coating standard for lens surfaces and the manufacturing technique of optoelectric detectors.
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, i.e., a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small sample size and ill-posed problems from which QDA and LDA suffer through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate the optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
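RDA's regularization is a two-parameter covariance blend. The sketch below follows Friedman's standard (λ, γ) formulation, which is assumed here to be the form whose parameters the paper's PSO tunes: λ shrinks the class covariance toward the pooled one (the QDA-to-LDA axis), and γ shrinks further toward a scaled identity to fix ill-conditioning from small samples.

```python
import numpy as np

def rda_covariance(S_k, S_pooled, lam, gamma):
    """Friedman-style RDA covariance estimate (sketch):
    first blend the class covariance S_k with the pooled covariance (lam),
    then shrink toward a scaled identity (gamma), so the result stays
    well-conditioned even when S_k is singular."""
    p = S_k.shape[0]
    S = (1 - lam) * S_k + lam * S_pooled
    return (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
```

Even a rank-one class covariance, which QDA could not invert, becomes positive definite after regularization.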
Drop impact upon superhydrophobic surfaces with regular and hierarchical roughness
NASA Astrophysics Data System (ADS)
Lv, Cunjing; Hao, Pengfei; Zhang, Xiwen; He, Feng
2016-04-01
Recent studies demonstrate that the roughness and morphology of surface textures play essential roles in the dynamics of water drops impacting onto superhydrophobic substrates. In particular, significant reduction of the contact time has attracted considerable attention. We experimentally investigate drop impact dynamics on three types of superhydrophobic surfaces, consisting of regular micropillars, two-tier textures with nano/micro-scale roughness, and hierarchical textures with random roughness. The results show that the contact time is controlled by the Weber number and the roughness of the surface. Compared with drop impact on regular micropillared surfaces, the contact time can be finely reduced by increasing the Weber number on surfaces with two-tier textures, but can be remarkably reduced on surfaces with hierarchical textures as a result of the prompt splash and fragmentation of liquid lamellae. Our study may shed light on textured materials fabrication, allowing rapid drop detachment for broad applications.
Partial Regularity for Holonomic Minimisers of Quasiconvex Functionals
NASA Astrophysics Data System (ADS)
Hopper, Christopher P.
2016-10-01
We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals, over a class of Sobolev mappings into a compact Riemannian manifold, to which such mappings are said to be holonomically constrained. Our approach uses the lifting of Sobolev mappings to the universal covering space, the connectedness of the covering space, an application of Ekeland's variational principle and a certain tangential A-harmonic approximation lemma obtained directly via a Lipschitz approximation argument. This allows regularity to be established directly on the level of the gradient. Several applications to variational problems in condensed matter physics with broken symmetries are also discussed, in particular those concerning the superfluidity of liquid helium-3 and nematic liquid crystals.
Compressing Regular Expressions' DFA Table by Matrix Decomposition
NASA Astrophysics Data System (ADS)
Liu, Yanbing; Guo, Li; Liu, Ping; Tan, Jianlong
Recently regular expression matching has become a research focus as a result of the urgent demand for Deep Packet Inspection (DPI) in many network security systems. Deterministic Finite Automaton (DFA), which recognizes a set of regular expressions, is usually adopted to cater to the need for real-time processing of network traffic. However, the huge memory usage of DFA prevents it from being applied even on a medium-sized pattern set. In this article, we propose a matrix decomposition method for DFA table compression. The basic idea of the method is to decompose a DFA table into the sum of a row vector, a column vector and a sparse matrix, all of which cost very little space. Experiments on typical rule sets show that the proposed method significantly reduces the memory usage and still runs at fast searching speed.
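The sum decomposition T[s, c] = row[s] + col[c] + E[s, c] can be illustrated in a few lines. The modal-value heuristic below for choosing the vectors is an assumption for illustration, not the paper's algorithm; it exploits the same property (most characters send most states to the same next state), leaving only a small sparse residual E to store explicitly.

```python
import numpy as np

def decompose_dfa(T):
    """Toy sum decomposition of a DFA transition table (heuristic sketch):
    col[c] = modal next state for character c, row[s] = modal residual for
    state s, and E keeps only the entries where the table deviates."""
    col = np.array([np.bincount(T[:, j]).argmax() for j in range(T.shape[1])])
    R = T - col[None, :]
    row = np.array([np.bincount(r - r.min()).argmax() + r.min() for r in R])
    E = R - row[:, None]
    sparse = {(int(i), int(j)): int(E[i, j]) for i, j in zip(*np.nonzero(E))}
    return row, col, sparse

def lookup(row, col, sparse, s, c):
    """Reconstruct T[s, c] from the compressed representation."""
    return row[s] + col[c] + sparse.get((s, c), 0)
```

For a table with only a handful of exceptional transitions, the storage drops from states × alphabet entries to two vectors plus a tiny dictionary, while lookups remain O(1).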
Chaos at Uranus Spreads Dust Across the Regular Satellites
NASA Astrophysics Data System (ADS)
Tamayo, Dan; Burns, J. A.; Nicholson, P. D.; Hamilton, D. P.
2012-05-01
The short collision timescales between the Uranian irregular satellites argue for the past generation of vast quantities of dust at the outer reaches of Uranus’ Hill sphere (Bottke et al. 2010). Uranus’ extreme obliquity (98 degrees) renders the orbits of large objects unstable to eccentricity perturbations in the radial range a ≈ 60 - 75 Rp (Tremaine et al. 2009). We study the effect on dust by investigating how the instability is modified by radiation pressure. We find that dust particles generated at the orbits of the irregular satellites move inward as radiation forces cause their orbits to decay (Burns et al. 1979). When they reach the unstable region, grain orbits undergo chaotic large-amplitude eccentricity oscillations that bring their pericenters inside the orbits of the regular satellites. We argue that the impact probabilities and expected spatial distribution across the satellite surfaces might explain the observed hemispherical color asymmetries common to the outer four regular satellites.
Regularization of hidden dynamics in piecewise smooth flows
NASA Astrophysics Data System (ADS)
Novaes, Douglas D.; Jeffrey, Mike R.
2015-11-01
This paper studies the equivalence between differentiable and non-differentiable dynamics in Rn. Filippov's theory of discontinuous differential equations allows us to find flow solutions of dynamical systems whose vector fields undergo switches at thresholds in phase space. The canonical convex combination at the discontinuity is only the linear part of a nonlinear combination that more fully explores Filippov's most general problem: the differential inclusion. Here we show how recent work relating discontinuous systems to singular limits of continuous (or regularized) systems extends to nonlinear combinations. We show that if sliding occurs in a discontinuous system, there exists a differentiable slow-fast system with equivalent slow invariant dynamics. We also show the corresponding result for the pinching method, a converse to regularization which approximates a smooth system by a discontinuous one.
Gevrey regularity for the supercritical quasi-geostrophic equation
NASA Astrophysics Data System (ADS)
Biswas, Animikh
2014-09-01
In this paper, following the techniques of Foias and Temam, we establish suitable Gevrey class regularity of solutions to the supercritical quasi-geostrophic equations in the whole space, with initial data in “critical” Sobolev spaces. Moreover, the Gevrey class that we obtain is “near optimal” and as a corollary, we obtain temporal decay rates of higher order Sobolev norms of the solutions. Unlike the Navier-Stokes or the subcritical quasi-geostrophic equations, the low dissipation poses a difficulty in establishing Gevrey regularity. A new commutator estimate in Gevrey classes, involving the dyadic Littlewood-Paley operators, is established that allows us to exploit the cancellation properties of the equation and circumvent this difficulty.
Resolving intravoxel fiber architecture using nonconvex regularized blind compressed sensing
NASA Astrophysics Data System (ADS)
Chu, C. Y.; Huang, J. P.; Sun, C. Y.; Liu, W. Y.; Zhu, Y. M.
2015-03-01
In diffusion magnetic resonance imaging, accurate and reliable estimation of intravoxel fiber architectures is a major prerequisite for tractography algorithms or any other derived statistical analysis. Several methods have been proposed that estimate intravoxel fiber architectures using low angular resolution acquisitions owing to their shorter acquisition time and relatively low b-values. But these methods are highly sensitive to noise. In this work, we propose a nonconvex regularized blind compressed sensing approach to estimate intravoxel fiber architectures in low angular resolution acquisitions. The method models diffusion-weighted (DW) signals as a sparse linear combination of unfixed reconstruction basis functions and introduces a nonconvex regularizer to enhance the noise immunity. We present a general solving framework to simultaneously estimate the sparse coefficients and the reconstruction basis. Experiments on synthetic, phantom, and real human brain DW images demonstrate the superiority of the proposed approach.
L1-Regularized Boltzmann Machine Learning Using Majorizer Minimization
NASA Astrophysics Data System (ADS)
Ohzeki, Masayuki
2015-05-01
We propose an inference method to estimate sparse interactions and biases according to Boltzmann machine learning. The basis of this method is L1 regularization, which is often used in compressed sensing, a technique for reconstructing sparse input signals from undersampled outputs. L1 regularization impedes the simple application of the gradient method, which optimizes the cost function that leads to accurate estimations, owing to the cost function's lack of smoothness. In this study, we utilize the majorizer minimization method, which is a well-known technique implemented in optimization problems, to avoid the non-smoothness of the cost function. By using the majorizer minimization method, we elucidate essentially relevant biases and interactions from given data with seemingly strongly-correlated components.
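The majorizer-minimization step for an L1-penalized cost can be sketched on a toy least-squares objective (standing in for the Boltzmann-machine likelihood, which is not reproduced here): bounding the smooth term by a quadratic majorizer makes each subproblem solvable in closed form by soft-thresholding.

```python
import numpy as np

def soft_threshold(x, t):
    # Closed-form minimizer of the quadratic-majorizer-plus-L1 subproblem.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_majorize_minimize(A, b, lam, n_iter=1000):
    """Minimize 0.5*||A w - b||^2 + lam*||w||_1 by repeatedly minimizing a
    smooth quadratic majorizer of the data term (ISTA-style updates)."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz bound used by the majorizer
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - b)       # gradient of the smooth part
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
w_hat = l1_majorize_minimize(A, A @ w_true, lam=0.1)
```

The non-smooth L1 term never needs to be differentiated: it only enters through the closed-form threshold, which is the point of the majorizer construction.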
Spatially adaptive regularized iterative high-resolution image reconstruction algorithm
NASA Astrophysics Data System (ADS)
Lim, Won Bae; Park, Min K.; Kang, Moon Gi
2000-12-01
High resolution images are often required in applications such as remote sensing, frame freeze in video, military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required. But such an array may be very costly or unavailable; thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness, while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is key to the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error. Therefore, a reconstruction algorithm which is robust against the registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least square minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information.
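Stripped of the spatial adaptivity (which is this paper's contribution), a CLS reconstruction with one global regularization parameter reduces to a linear solve. A minimal 1D sketch, with assumed toy downsampling operators standing in for the registered, blurred frames:

```python
import numpy as np

def cls_reconstruct(H_list, y_list, C, lam):
    """Constrained least squares: minimize sum_k ||y_k - H_k x||^2 + lam*||C x||^2.
    A single global lam is used here; the paper makes the regularization
    operator and parameter spatially adaptive."""
    n = H_list[0].shape[1]
    A = lam * (C.T @ C)
    rhs = np.zeros(n)
    for H, y in zip(H_list, y_list):
        A += H.T @ H
        rhs += H.T @ y
    return np.linalg.solve(A, rhs)

# Toy 1D example: two 2x-downsampled "frames" on sub-pixel-shifted grids.
n = 16
x_true = np.sin(np.linspace(0, 2 * np.pi, n))
H0 = np.eye(n)[0::2]                  # frame 1: even samples
H1 = np.eye(n)[1::2]                  # frame 2: odd samples (shifted grid)
C = np.diff(np.eye(n), 2, axis=0)     # second-difference smoothness operator
x_hat = cls_reconstruct([H0, H1], [H0 @ x_true, H1 @ x_true], C, lam=1e-3)
```

Because the two shifted frames jointly sample every position, the regularized solve recovers the high-resolution signal almost exactly; with fewer frames the smoothness term does more of the work.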
Regularization of identity based solution in string field theory
NASA Astrophysics Data System (ADS)
Zeze, Syoji
2010-10-01
We demonstrate that an Erler-Schnabl type solution in cubic string field theory can be naturally interpreted as a gauge invariant regularization of an identity based solution. We consider a solution which interpolates between an identity based solution and ordinary Erler-Schnabl one. Two gauge invariant quantities, the classical action and the closed string tadpole, are evaluated for finite value of the gauge parameter. It is explicitly checked that both of them are independent of the gauge parameter.
Regular Scanning Tunneling Microscope Tips can be Intrinsically Chiral
Tierney, Heather L.; Murphy, Colin J.; Sykes, E. Charles H.
2011-01-07
We report our discovery that regular scanning tunneling microscope tips can themselves be chiral. This chirality leads to differences in electron tunneling efficiencies through left- and right-handed molecules, and, when using the tip to electrically excite molecular rotation, large differences in rotation rate were observed which correlated with molecular chirality. As scanning tunneling microscopy is a widely used technique, this result may have unforeseen consequences for the measurement of asymmetric surface phenomena in a variety of important fields.
On c_2 invariants of some 4-regular Feynman graphs
NASA Astrophysics Data System (ADS)
Doryn, Dmitry
2017-03-01
The obstruction for application of techniques like denominator reduction for the computation of the c_2 invariant of Feynman graphs in general is the absence of a 3-valent vertex. In this paper such a formula for a 4-valent vertex is derived. The formula allows us to compute the c_2 invariant of new graphs, for instance, some 4-regular graphs with small loop number.
Regular analgesic use and risk of multiple myeloma.
Moysich, Kirsten B; Bonner, Mathew R; Beehler, Gregory P; Marshall, James R; Menezes, Ravi J; Baker, Julie A; Weiss, Joli R; Chanan-Khan, Asher
2007-04-01
Analgesic use has been implicated in the chemoprevention of a number of solid tumors, but to date no previous research has focused on the role of analgesics in the etiology of multiple myeloma (MM). We conducted a hospital-based case-control study of 117 patients with primary, incident MM and 483 age and residence matched controls without benign or malignant neoplasms. All participants received medical services at Roswell Park Cancer Institute in Buffalo, NY, and completed a comprehensive epidemiological questionnaire. Participants who reported analgesic use at least once a week for at least 6 months were classified as regular users; individuals who did not use analgesics regularly served as the reference group throughout the analyses. We used unconditional logistic regression analyses to compute crude and adjusted odds ratios (ORs) with corresponding 95% confidence intervals (CIs). Compared to non-users, regular aspirin users were not at reduced risk of MM (adjusted OR=0.99; 95% CI 0.65-1.49), nor were participants with the highest frequency or duration of aspirin use. A significant risk elevation was found for participants who were regular acetaminophen users (adjusted OR=2.95; 95% CI 1.72-5.08). Further, marked increases in risk of MM were noted with both greater frequency (>7 tablets weekly; adjusted OR=4.36; 95% CI 1.70-11.2) and greater duration (>10 years; adjusted OR=3.26; 95% CI 1.52-7.02) of acetaminophen use. We observed no evidence of a chemoprotective effect of aspirin on MM risk, but observed significant risk elevations with various measures of acetaminophen use. Our results warrant further investigation in population-based case-control and cohort studies and should be interpreted with caution in light of the limited sample size and biases inherent in hospital-based studies.
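A crude (unadjusted) odds ratio with a Wald 95% confidence interval can be computed directly from a 2x2 table; the counts below are hypothetical, not the study's data, and the study's adjusted ORs come from logistic regression instead.

```python
import math

def odds_ratio_ci(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Crude odds ratio from a 2x2 table with a Wald 95% CI."""
    a, b, c, d = exp_cases, exp_controls, unexp_cases, unexp_controls
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts (regular users vs. non-users, cases vs. controls):
or_, (lo, hi) = odds_ratio_ci(40, 80, 77, 403)
```

An interval excluding 1.0, as here, corresponds to the "significant risk elevation" language used in the abstract.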
Knowing More than One Can Say: The Early Regular Plural
ERIC Educational Resources Information Center
Zapf, Jennifer A.; Smith, Linda B.
2009-01-01
This paper reports on partial knowledge in two-year-old children's learning of the regular English plural. In Experiments 1 and 2, children were presented with one kind and its label and then were either presented with two of that same kind (A→AA) or the initial picture next to a very different thing (A→AB). The children in…
[Iterated Tikhonov Regularization for Spectral Recovery from Tristimulus].
Xie, De-hong; Li, Rui; Wan, Xiao-xia; Liu, Qiang; Zhu, Wen-feng
2016-01-01
Reflectance spectra in a multispectral image can objectively and faithfully represent color information owing to their high dimensionality and their independence of illuminant and device. To address the loss of spectral information, and the consequent loss of color information, that occurs when spectral data are reconstructed from three-dimensional colorimetric data in a trichromatic camera-based spectral image acquisition system, this work proposes an iterated Tikhonov regularization to reconstruct the reflectance spectra. First, according to the relationship between colorimetric values and reflectance spectra in colorimetric theory, we construct a spectral reconstruction equation that recovers high-dimensional spectral data from the three-dimensional colorimetric data acquired by the trichromatic camera. Then, iterated Tikhonov regularization, inspired by the Moore-Penrose pseudoinverse, is used to cope with the linear ill-posed inverse problem that arises in solving the spectral reconstruction equation. The L-curve method is used to obtain an optimal regularization parameter for the iterated Tikhonov regularization from a set of training samples. These methods effectively control and improve the ill-conditioning of the spectral reconstruction equation and thereby reduce the loss of spectral information in the reconstructed spectral data. A verification experiment is performed on a separate set of samples. The experimental results show that the proposed method reconstructs reflectance spectra with less loss of spectral information in the trichromatic camera-based spectral image acquisition system, reflected in clear decreases of spectral and colorimetric errors compared with the previous method.
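The core iteration is simple to sketch. Below is a minimal iterated Tikhonov solver on a toy 3-to-31 problem; the observation matrix is an assumption standing in for the camera's colorimetric response, and the L-curve selection of the parameter used in the paper is omitted.

```python
import numpy as np

def iterated_tikhonov(A, b, alpha, n_iter):
    """Iterated Tikhonov regularization for the ill-posed system A x = b.
    Each pass feeds the previous residual back in; a single pass
    (n_iter=1) is ordinary Tikhonov regularization."""
    x = np.zeros(A.shape[1])
    M = A.T @ A + alpha * np.eye(A.shape[1])
    for _ in range(n_iter):
        x = x + np.linalg.solve(M, A.T @ (b - A @ x))
    return x

# Toy setup: recover a 31-band reflectance spectrum from 3 camera responses.
rng = np.random.default_rng(1)
A = rng.random((3, 31))                                  # assumed observation matrix
spectrum = np.exp(-(((np.linspace(400, 700, 31) - 550) / 60) ** 2))
b = A @ spectrum
x_hat = iterated_tikhonov(A, b, alpha=1e-2, n_iter=10)
residual = np.linalg.norm(A @ x_hat - b)
```

With only three measurements the reconstruction is far from unique; the regularization selects a small-norm solution consistent with the data, which is why the choice of alpha (here fixed, in the paper chosen via the L-curve) matters.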
[Structural regularities in activated cleavage sites of thrombin receptors].
Mikhaĭlik, I V; Verevka, S V
1999-01-01
Comparison of the sequences at the activation cleavage sites of thrombin receptors reveals their similarity both to the activation cleavage sites of protein precursors and to the reactive sites of protein proteinase inhibitors. In all these sites, the P2' positions, which correspond to effector sites, are occupied exclusively by hydrophobic amino acids. This regularity is consistent with the earlier thesis on the role of the effector S2' site in regulating processes mediated by serine proteinases.
Evolution of binary supermassive black holes via chain regularization.
Szell, Andras; Merritt, David; Mikkola, Seppo
2005-06-01
A chain regularization method is combined with special purpose computer hardware to study the evolution of massive black hole binaries at the centers of galaxies. Preliminary results with up to N = 0.26 × 10^6 particles are presented. The decay rate of the binary is shown to decrease with increasing N, as expected on the basis of theoretical arguments. The eccentricity of the binary remains small.
Yosida-Moreau Regularization of Sweeping Processes with Unbounded Variation
NASA Astrophysics Data System (ADS)
Kunze, M.; Monteiro Marques, M. D. P.
1996-09-01
Let t ↦ C(t) be a Hausdorff-continuous multifunction with closed convex values in a Hilbert space H such that C(t) has nonempty interior for all t. We show that the Yosida-Moreau regularizations of the sweeping process with moving set C(t), i.e., the solutions of [formula], converge strongly and pointwise as λ → 0+ to the solution of the corresponding sweeping process, formally written as [formula].
Velocity Averaging, Kinetic Formulations and Regularizing Effects in Quasilinear PDEs
2005-10-31
nonlinear conservation laws. In [LPT94a], Lions, Perthame & Tadmor have shown that entropy solutions of such laws admit a regularizing effect of a fractional... one augments (1.1) with additional conditions on the behavior of Φ(ρ) for a large enough family of entropies Φ. These additional entropy conditions... imply that g is in fact a positive distribution, g = m ∈ M+, measuring the entropy dissipation of the nonlinear equation. We arrive at the kinetic
Effects of registration regularization and atlas sharpness on segmentation accuracy.
Yeo, B T Thomas; Sabuncu, Mert R; Desikan, Rahul; Fischl, Bruce; Golland, Polina
2008-10-01
In non-rigid registration, the tradeoff between warp regularization and image fidelity is typically determined empirically. In atlas-based segmentation, this leads to a probabilistic atlas of arbitrary sharpness: weak regularization results in well-aligned training images and a sharp atlas; strong regularization yields a "blurry" atlas. In this paper, we employ a generative model for the joint registration and segmentation of images. The atlas construction process arises naturally as estimation of the model parameters. This framework allows the computation of unbiased atlases from manually labeled data at various degrees of "sharpness", as well as the joint registration and segmentation of a novel brain in a consistent manner. We study the effects of the tradeoff of atlas sharpness and warp smoothness in the context of cortical surface parcellation. This is an important question because of the increasing availability of atlases in public databases, and the development of registration algorithms separate from the atlas construction process. We find that the optimal segmentation (parcellation) corresponds to a unique balance of atlas sharpness and warp regularization, yielding statistically significant improvements over the FreeSurfer parcellation algorithm. Furthermore, we conclude that one can simply use a single atlas computed at an optimal sharpness for the registration-segmentation of a new subject with a pre-determined, fixed, optimal warp constraint. The optimal atlas sharpness and warp smoothness can be determined by probing the segmentation performance on available training data. Our experiments also suggest that segmentation accuracy is tolerant up to a small mismatch between atlas sharpness and warp smoothness.
Exploiting Lexical Regularities in Designing Natural Language Systems.
1988-04-01
This paper presents the lexical component of the START Question Answering system developed at the MIT Artificial Intelligence Laboratory.
Manifestly scale-invariant regularization and quantum effective operators
NASA Astrophysics Data System (ADS)
Ghilencea, D. M.
2016-05-01
Scale-invariant theories are often used to address the hierarchy problem. However the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function μ(σ) which, after spontaneous breaking of the scale symmetry, generates the usual dimensional regularization subtraction scale μ(⟨σ⟩). One consequence is that "evanescent" interactions generated by scale invariance of the action in d = 4 - 2ε (but vanishing in d = 4) give rise to new, finite quantum corrections. We find a (finite) correction ΔU(ϕ,σ) to the one-loop scalar potential for ϕ and σ, beyond the Coleman-Weinberg term. ΔU is due to an evanescent correction (∝ ε) to the field-dependent masses (of the states in the loop) which multiplies the pole (∝ 1/ε) of the momentum integral to give a finite quantum result. ΔU contains a nonpolynomial operator ϕ^6/σ^2 of known coefficient and is independent of the subtraction dimensionless parameter. A more general μ(ϕ,σ) is ruled out since, in their classical decoupling limit, the visible sector (of the Higgs ϕ) and hidden sector (dilaton σ) still interact at the quantum level; thus, the subtraction function must depend on the dilaton only, μ ∼ σ. The method is useful in models where preserving scale symmetry at quantum level is important.
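Schematically (our gloss, not the paper's notation), the finite correction arises because the O(ε) evanescent piece of the field-dependent masses multiplies the 1/ε pole of the momentum integral:

```latex
% Schematic only: \delta M^2 denotes the evanescent O(\varepsilon) correction
% to the field-dependent masses of the loop states; c is a loop factor.
\Delta U(\phi,\sigma)\;\sim\;
\lim_{\varepsilon\to 0}\;
\underbrace{\varepsilon\,\delta M^{2}(\phi,\sigma)}_{\text{evanescent piece}}
\times
\underbrace{\frac{c}{\varepsilon}}_{\text{pole of the loop integral}}
\;=\; c\,\delta M^{2}(\phi,\sigma)\;\neq\;0
```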
Effects of regular exercise training on skeletal muscle contractile function
NASA Technical Reports Server (NTRS)
Fitts, Robert H.
2003-01-01
Skeletal muscle function is critical to movement and one's ability to perform daily tasks, such as eating and walking. One objective of this article is to review the contractile properties of fast and slow skeletal muscle and single fibers, with particular emphasis on the cellular events that control or rate limit the important mechanical properties. Another important goal of this article is to present the current understanding of how the contractile properties of limb skeletal muscle adapt to programs of regular exercise.
Regularization of zero-range effective interactions in finite nuclei
NASA Astrophysics Data System (ADS)
Brenna, Marco; Colò, Gianluca; Roca-Maza, Xavier
2014-10-01
The problem of the divergences which arise in beyond-mean-field calculations, when a zero-range effective interaction is employed, has not received much attention so far. Some of us have proposed, quite recently, a scheme to regularize a zero-range Skyrme-type force when it is employed to calculate the total energy, at the second-order perturbation theory level, in uniform matter. Although this scheme looked promising, its extension to finite nuclei is not straightforward. We introduce such a procedure in the current paper, by proposing a regularization procedure that is similar, in spirit, to the one employed to extract the so-called V_low-k from the bare force. Although this has been suggested already by B. G. Carlsson and collaborators, the novelty of our work consists in treating uniform matter and finite nuclei on an equal footing; in particular, we show how interactions that have been regularized in uniform matter behave when they are used in a finite nucleus with the corresponding cutoff. We also address the problem of the validity of the perturbative approach in finite nuclei for the total energy.
Diazepam tolerance: effect of age, regular sedation, and alcohol.
Cook, P J; Flanagan, R; James, I M
1984-01-01
The dose of intravenous diazepam required for sedation was estimated in a series of 78 patients aged 17-85 years given the drug for dental and endoscopic procedures. Multiple regression analysis showed a significant correlation (r = 0.71; p < 0.001) between dose and age, body weight, the taking of regular sedation, and the taking of more than 40 g alcohol daily, but there were no differences in the doses required between men and women, smokers and non-smokers, inpatients and outpatients, or dental and endoscopy patients. Patients aged 80 required an average dose of 10 mg and patients aged 20 an average dose of 30 mg, and the dose required was much higher in those receiving regular sedation or having a high alcohol intake. Plasma total and free diazepam concentrations were measured in the second half of the series of patients (n = 37). Plasma concentrations required for sedation fell twofold to threefold between the ages of 20 and 80 and were significantly higher in those taking regular sedation or alcohol. Differences in the acute response to diazepam appeared to be due to differences in the sensitivity of the central nervous system (pharmacodynamic tolerance) rather than to differences in pharmacokinetic factors. PMID:6432093
Hessian-regularized co-training for social activity recognition.
Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang
2014-01-01
Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training variants have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.
Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.
Liu, Kui; Tan, Jieqing; Ai, Liefu
2016-01-01
To eliminate the staircasing effect for total variation filter and synchronously avoid the edges blurring for fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is considered as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters can be adaptively selected according to the diffusion function. When the pixels locate at the edges, the total variation filter is selected to filter the image, which can preserve the edges. When the pixels belong to the flat regions, the fourth-order filter is adopted to smooth the image, which can eliminate the staircase artifacts. In addition, the split Bregman and relaxation approach are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both the qualitative and quantitative evaluations.
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
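As a rough illustration only (the values are arbitrary, the annotations after `<-` are explanatory and not part of the file, and the PEST documentation should be consulted for the authoritative format), the Tikhonov controls discussed above live in the `* regularisation` section of the PEST control file:

```
* regularisation
1000.0   1050.0    0.10      <- PHIMLIM  PHIMACCEPT  FRACPHIM
1.0      1.0e-10   1.0e+10   <- WFINIT   WFMIN       WFMAX
1.3      1.0e-2    1         <- WFFAC    WFTOL       IREGADJ
```

PHIMLIM sets the target measurement objective function below which PEST trades goodness of fit for regularity; setting it unreachably low effectively disables the Tikhonov constraints, which is one of the misapplications this kind of guidance aims to prevent.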
Channeling power across ecological systems: social regularities in community organizing.
Christens, Brian D; Inzeo, Paula Tran; Faust, Victoria
2014-06-01
Relational and social network perspectives provide opportunities for more holistic conceptualizations of phenomena of interest in community psychology, including power and empowerment. In this article, we apply these tools to build on multilevel frameworks of empowerment by proposing that networks of relationships between individuals constitute the connective spaces between ecological systems. Drawing on an example of a model for grassroots community organizing practiced by WISDOM—a statewide federation supporting local community organizing initiatives in Wisconsin—we identify social regularities (i.e., relational and temporal patterns) that promote empowerment and the development and exercise of social power through building and altering relational ties. Through an emphasis on listening-focused one-to-one meetings, reflection, and social analysis, WISDOM organizing initiatives construct and reinforce social regularities that develop social power in the organizing initiatives and advance psychological empowerment among participant leaders in organizing. These patterns are established by organizationally driven brokerage and mobilization of interpersonal ties, some of which span ecological systems. Hence, elements of these power-focused social regularities can be conceptualized as cross-system channels through which micro-level empowerment processes feed into macro-level exercise of social power, and vice versa. We describe examples of these channels in action, and offer recommendations for theory and design of future action research.
Novel regularized sparse model for fluorescence molecular tomography reconstruction
NASA Astrophysics Data System (ADS)
Liu, Yuhao; Liu, Jie; An, Yu; Jiang, Shixin
2017-01-01
Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in small animals. FMT has been used in surgical navigation for tumor resection and has many potential applications at the physiological, metabolic, and molecular levels in tissues. Hybrid systems combining FMT and X-ray computed tomography (XCT) have been pursued for accurate detection. However, the result is usually over-smoothed and over-shrunk. In this paper, we propose a region reconstruction method for FMT in which elastic net (E-net) regularization is used to combine the L1-norm and the L2-norm. The E-net penalty adds an L1-norm penalty to an L2-norm penalty and thus combines the advantages of both: it achieves a balance between sparsity and smoothness by employing the two norms simultaneously. To solve the problem effectively, a proximal gradient algorithm is used to accelerate the computation. To evaluate the performance of the proposed E-net method, numerical phantom experiments are conducted. The simulation study shows that the proposed method is accurate and reconstructs images effectively.
Path integral regularization of pure Yang-Mills theory
Jacquot, J. L.
2009-07-15
In enlarging the field content of pure Yang-Mills theory to a cutoff dependent matrix valued complex scalar field, we construct a vectorial operator, which is by definition invariant with respect to the gauge transformation of the Yang-Mills field and with respect to a Stueckelberg type gauge transformation of the scalar field. This invariant operator converges to the original Yang-Mills field as the cutoff goes to infinity. With the help of cutoff functions, we construct with this invariant a regularized action for the pure Yang-Mills theory. In order to be able to define both the gauge and scalar fields kinetic terms, other invariant terms are added to the action. Since the scalar fields flat measure is invariant under the Stueckelberg type gauge transformation, we obtain a regularized gauge-invariant path integral for pure Yang-Mills theory that is mathematically well defined. Moreover, the regularized Ward-Takahashi identities describing the dynamics of the gauge fields are exactly the same as the formal Ward-Takahashi identities of the unregularized theory.
Manifold regularized non-negative matrix factorization with label information
NASA Astrophysics Data System (ADS)
Li, Huirong; Zhang, Jiangshe; Wang, Changpeng; Liu, Junmin
2016-03-01
Non-negative matrix factorization (NMF) as a popular technique for finding parts-based, linear representations of non-negative data has been successfully applied in a wide range of applications, such as feature learning, dictionary learning, and dimensionality reduction. However, both the local manifold regularization of data and the discriminative information of the available label have not been taken into account together in NMF. We propose a new semisupervised matrix decomposition method, called manifold regularized non-negative matrix factorization (MRNMF) with label information, which incorporates the manifold regularization and the label information into the NMF to improve the performance of NMF in clustering tasks. We encode the local geometrical structure of the data space by constructing a nearest neighbor graph and enhance the discriminative ability of different classes by effectively using the label information. Experimental comparisons with the state-of-the-art methods on the COIL20, PIE, Extended Yale B, and MNIST databases demonstrate the effectiveness of MRNMF.
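The manifold-regularization half of such a method can be sketched with the standard graph-regularized NMF multiplicative updates; the label-information term of MRNMF is omitted here, and the adjacency matrix is assumed given (e.g. from a nearest-neighbor graph):

```python
import numpy as np

def graph_regularized_nmf(X, W_adj, k, lam=0.1, n_iter=200, seed=0):
    """Multiplicative updates for X ~ U @ V.T with a graph-Laplacian penalty
    lam * Tr(V.T @ L @ V) on the sample factors V (manifold regularization,
    in the style of graph-regularized NMF). Label information is omitted."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W_adj.sum(axis=1))     # degree matrix; L = D - W_adj
    eps = 1e-12
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W_adj @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# Tiny demo on random non-negative data with a random symmetric adjacency.
rng = np.random.default_rng(3)
X = rng.random((30, 12))
W_adj = (rng.random((12, 12)) > 0.7).astype(float)
W_adj = np.maximum(W_adj, W_adj.T)
np.fill_diagonal(W_adj, 0.0)
U, V = graph_regularized_nmf(X, W_adj, k=4)
rel_err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

The multiplicative form preserves non-negativity automatically, and the lam-weighted graph terms pull the representations of adjacent samples toward each other, encoding the local geometry the abstract refers to.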
Manifold regularized multitask feature learning for multimodality disease classification.
Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang
2015-02-01
Multimodality-based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality-based methods is that they ignore the useful data distribution information within each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information within each modality. Specifically, we treat the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse the multimodality data for the eventual classification. We also extend our method to the semisupervised setting, where only part of the data is labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method not only achieves improved classification performance, but also helps to discover the disease-related brain regions useful for disease diagnosis.
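The group-sparsity (l2,1-norm) regularizer used above to couple feature selection across modalities admits a simple closed-form proximal step; a minimal sketch (the threshold `lam` and the toy weight matrix are illustrative, and the full method also carries the manifold Laplacian term and a multikernel SVM stage):

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of lam * ||W||_{2,1}. Each row of W holds one
    feature's weights across all tasks (modalities); rows whose l2 norm
    falls below lam are zeroed jointly, which is what makes the selected
    features common to all modalities."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale
```

A strong feature row is shrunk but kept for every modality, while a weak row is discarded for all modalities at once.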
Enhanced manifold regularization for semi-supervised classification.
Gan, Haitao; Luo, Zhizeng; Fan, Yingle; Sang, Nong
2016-06-01
Manifold regularization (MR) has become one of the most widely used approaches in the semi-supervised learning field. It has shown its superiority by exploiting the local manifold structure of both labeled and unlabeled data. The manifold structure is modeled by constructing a Laplacian graph and is then incorporated into learning through a smoothness regularization term, so that the labels of labeled and unlabeled data vary smoothly along the geodesics on the manifold. However, MR ignores the discriminative ability of the labeled and unlabeled data. To address this problem, we propose an enhanced MR framework for semi-supervised classification in which the local discriminative information of the labeled and unlabeled data is explicitly exploited. To make full use of the labeled data, we first employ a semi-supervised clustering method to discover the underlying structure of the whole dataset. We then construct a local discrimination graph to model the discriminative information of labeled and unlabeled data according to the discovered intrinsic structure. In this way, data points that may come from different clusters, though similar on the manifold, are forced to be far away from each other. Finally, the discrimination graph is incorporated into the MR framework. In particular, we utilize semi-supervised fuzzy c-means and Laplacian regularized kernel minimum squared error for semi-supervised clustering and classification, respectively. Experimental results on several benchmark datasets and on face recognition demonstrate the effectiveness of our proposed method.
Isotropic model for cluster growth on a regular lattice
NASA Astrophysics Data System (ADS)
Yates, Christian A.; Baker, Ruth E.
2013-08-01
There exists a plethora of mathematical models for cluster growth and/or aggregation on regular lattices. Almost all suffer from inherent anisotropy caused by the regular lattice upon which they are grown. We analyze the little-known model for stochastic cluster growth on a regular lattice first introduced by Ferreira Jr. and Alves [J. Stat. Mech.: Theory Exp. (2006) P11007], which produces circular clusters with no discernible anisotropy. We demonstrate that even in the noise-reduced limit the clusters remain circular. We adapt the model by introducing a specific rearrangement algorithm so that, rather than adding elements to the cluster from the outside (corresponding to apical growth), our model uses mitosis-like cell splitting events to increase the cluster size. We analyze the surface scaling properties of our model and compare it to the behavior of more traditional models. In “1+1” dimensions we discover and explore a new, nonmonotonic surface thickness scaling relationship which differs significantly from the Family-Vicsek scaling relationship. This suggests that, for models whose clusters do not grow through particle additions which are solely dependent on surface considerations, the traditional classification into “universality classes” may not be appropriate.
Matching effective chiral Lagrangians with dimensional and lattice regularizations
NASA Astrophysics Data System (ADS)
Niedermayer, F.; Weisz, P.
2016-04-01
We compute the free energy in the presence of a chemical potential coupled to a conserved charge in effective O(n) scalar field theory (without explicit symmetry-breaking terms) to third order for asymmetric volumes in general d dimensions, using dimensional (DR) and lattice regularizations. This yields relations between the 4-derivative couplings appearing in the effective actions for the two regularizations, which in turn allows us to translate results, e.g. the mass gap in a finite periodic box in d = 3 + 1 dimensions, from one regularization to the other. Consistency is found with a new direct computation of the mass gap using DR. For the case n = 4, d = 4 the model is the low-energy effective theory of QCD with N_f = 2 massless quarks. The results can thus be used to obtain estimates of low-energy constants in the effective chiral Lagrangian from measurements of low-energy observables, including the low-lying spectrum of N_f = 2 QCD in the δ-regime using lattice simulations, as proposed by Peter Hasenfratz, or from the susceptibility corresponding to the chemical potential used.
SAR image regularization with fast approximate discrete minimization.
Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc
2009-07-01
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for the successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban-area SAR images.
Global Regularization Method for Planar Restricted Three-body Problem
NASA Astrophysics Data System (ADS)
Sharaf, M. A.; Dwidar, H. R.
2015-12-01
In this paper, a global regularization method for the planar restricted three-body problem is proposed using the transformation z = x + iy = ν cos^n(u + iv), where i = √(-1), 0 < ν ≤ 1 and n is a positive integer. The method is developed analytically and computationally. For the analytical developments, analytical solutions in power series of the pseudo-time τ are obtained for the positions and velocities (u, v, u', v') and (x, y, dx/dt, dy/dt) in the regularized and physical planes, respectively; the physical time t is also obtained as a power series in τ. Moreover, relations between the coefficients of the power series are obtained for two consecutive values of n. We also develop analytical solutions in power series form for the inverse problem of finding τ in terms of t. As typical examples, three symbolic expressions for the coefficients of the power series are developed in terms of the initial values. As for the computational developments, the globally regularized equations of motion are developed together with their initial values in forms suitable for digital computation using any differential-equation solver. Finally, for the numerical evaluation of the power series, an efficient method based on continued-fraction theory is provided.
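Reading the abstract's transformation as z = ν cos^n(u + iv), the coordinate map and the usual pseudo-time factor can be sketched numerically as follows (the time rescaling dt/dτ = |dz/dw|² is the conventional choice in such regularizations and is an assumption here, as are the parameter values):

```python
import numpy as np

def to_physical(u, v, nu=1.0, n=1):
    """Map regularized coordinates (u, v) to physical (x, y)
    via z = x + iy = nu * cos(u + iv)**n."""
    w = u + 1j * v
    z = nu * np.cos(w) ** n
    return z.real, z.imag

def time_factor(u, v, nu=1.0, n=1):
    """|dz/dw|^2, the factor conventionally relating physical time t
    to the pseudo-time tau; it vanishes where the map is singular,
    which is what removes the singularity from the equations of motion."""
    w = u + 1j * v
    dzdw = -n * nu * np.cos(w) ** (n - 1) * np.sin(w)
    return abs(dzdw) ** 2
```

For n = 1, ν = 1 the map reduces to the classical Thiele-type z = cos(u + iv); u = v = 0 lands on z = 1 and the time factor vanishes there.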
Interior Regularity Estimates in High Conductivity Homogenization and Application
NASA Astrophysics Data System (ADS)
Briane, Marc; Capdeboscq, Yves; Nguyen, Luc
2013-01-01
In this paper, uniform pointwise regularity estimates for the solutions of conductivity equations are obtained in a unit-conductivity medium reinforced by an ε-periodic lattice of highly conducting thin rods. The estimates are derived only at a distance ε^(1+τ) (for some τ > 0) away from the fibres. This distance constraint is rather sharp, since the gradients of the solutions are shown to be unbounded locally in L^p as soon as p > 2. One key ingredient is the derivation, in dimension two, of regularity estimates for the solutions of the equations deduced from a Fourier series expansion with respect to the fibres' direction, weighted by the high-contrast conductivity. The dependence on powers of ε of these two-dimensional estimates is shown to be sharp. The initial motivation for this work comes from imaging, and the enhanced-resolution phenomena observed experimentally in the presence of micro-structures (Lerosey et al., Science 315:1120-1124, 2007). We use these regularity estimates to characterize the signature of low-volume-fraction heterogeneities in the fibre-reinforced medium, assuming that the heterogeneities stay at a distance ε^(1+τ) away from the fibres.
Hessian-Regularized Co-Training for Social Activity Recognition
Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang
2014-01-01
Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training variants have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of the examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. The Hessian can properly exploit the local structure of the underlying data manifold, and Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms. PMID:25259945
Regularity and predictability of human mobility in personal space.
Austin, Daniel; Cross, Robin M; Hayes, Tamara; Kaye, Jeffrey
2014-01-01
Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out of home activity during travel or social interactions with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity) is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.
On minimal energy dipole moment distributions in regular polygonal agglomerates
NASA Astrophysics Data System (ADS)
Rosa, Adriano Possebon; Cunha, Francisco Ricardo; Ceniceros, Hector Daniel
2017-01-01
Static, regular polygonal and close-packed clusters of spherical magnetic particles and their energy-minimizing magnetic moments are investigated in a two-dimensional setting. This study focuses on a simple particle system which is described solely by the dipole-dipole interaction energy, both without and in the presence of an in-plane magnetic field. For a regular polygonal structure of n sides with n ≥ 3, and in the absence of an external field, it is proved rigorously that the magnetic moments given by the roots of unity, i.e. tangential to the polygon, minimize the dipole-dipole interaction energy. Also, for zero external field, new multiple local minima are discovered for the regular polygonal agglomerates. The number of local extrema found is proportional to ⌊n/2⌋, and these critical points are characterized by the presence of a pair of magnetic moments with a large deviation from the tangential configuration, whose particles are at least three diameters apart. The changes induced by an in-plane external magnetic field on the minimal-energy, tangential configurations are investigated numerically. The two critical fields, which correspond to a crossover with the linear-chain minimal energy and to the break-up of the agglomerate, respectively, are examined in detail. In particular, the numerical results are compared directly with the asymptotic formulas of Danilov et al. (2012) [23], and remarkable agreement is found even for moderate to large fields. Finally, three examples of close-packed structures are investigated: a triangle, a centered hexagon, and a 19-particle close-packed cluster. The numerical study reveals novel, illuminating characteristics of these compact clusters often seen in ferrofluids. The centered hexagon is energetically favorable to the regular hexagon, and the minimal energy for the larger 19-particle cluster is even lower than that of the close-packed hexagon. In addition, this larger close-packed agglomerate has two
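The tangential (roots-of-unity) result quoted above is easy to probe numerically. In this hedged sketch the dimensionless pair energy is taken as [mᵢ·mⱼ - 3(mᵢ·r̂)(mⱼ·r̂)]/r³ with all physical prefactors absorbed; the polygon radius and the radial comparison configuration are illustrative choices:

```python
import numpy as np

def dipole_energy(pos, m):
    """Total dipole-dipole interaction energy, summed over pairs:
    [m_i . m_j - 3 (m_i . r_hat)(m_j . r_hat)] / r^3 (dimensionless units)."""
    E = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[j] - pos[i]
            d = np.linalg.norm(r)
            rh = r / d
            E += (m[i] @ m[j] - 3.0 * (m[i] @ rh) * (m[j] @ rh)) / d ** 3
    return E

def polygon(n, R=1.0):
    """Vertices of a regular n-gon plus tangential (roots-of-unity)
    and radial unit moment configurations for comparison."""
    th = 2.0 * np.pi * np.arange(n) / n
    pos = R * np.stack([np.cos(th), np.sin(th)], axis=1)
    tangential = np.stack([-np.sin(th), np.cos(th)], axis=1)
    radial = np.stack([np.cos(th), np.sin(th)], axis=1)
    return pos, tangential, radial
```

Comparing the two arrangements on the same polygon shows the tangential one is lower in energy, consistent with the minimizer result stated in the abstract.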
Regular biorthogonal pairs and pseudo-bosonic operators
NASA Astrophysics Data System (ADS)
Inoue, H.; Takakura, M.
2016-08-01
The first purpose of this paper is to show a method of constructing a regular biorthogonal pair based on the commutation rule ab - ba = I for a pair of operators a and b acting on a Hilbert space H with inner product (·|·). Here, sequences {ϕn} and {ψn} in a Hilbert space H are biorthogonal if (ϕn|ψm) = δnm, n, m = 0, 1, …, and they are regular if both Dϕ ≡ Span{ϕn} and Dψ ≡ Span{ψn} are dense in H. Indeed, the assumptions used to construct the regular biorthogonal pair coincide with the definition of pseudo-bosons as originally given in F. Bagarello ["Pseudobosons, Riesz bases, and coherent states," J. Math. Phys. 51, 023531 (2010)]. Furthermore, we study the connections between the pseudo-bosonic operators a, b, a†, b† and the pseudo-bosonic operators defined by a regular biorthogonal pair ({ϕn}, {ψn}) and an ONB e of H, studied in H. Inoue ["General theory of regular biorthogonal pairs and its physical applications," e-print arXiv:1604.01967 [math-ph]]. The second purpose is to define and study the notion of D-pseudo-bosons in F. Bagarello ["More mathematics for pseudo-bosons," J. Math. Phys. 54, 063512 (2013)] and F. Bagarello ["From self-adjoint to non self-adjoint harmonic oscillators: Physical consequences and mathematical pitfalls," Phys. Rev. A 88, 032120 (2013)], and to give a method of constructing D-pseudo-bosons in several steps. It is then shown that for any ONB e = {en} in H and any operators T and T⁻¹ in L†(D), we may construct operators A and B satisfying the D-pseudo-boson conditions, where D is a dense subspace of a Hilbert space H and L†(D) is the set of all linear operators T from D to D such that T*D ⊂ D, where T* is the adjoint of T. Finally, we give some physical examples of D-pseudo-bosons based on standard bosons, using the construction method described above.
Factors associated with regular dental visits among hemodialysis patients
Yoshioka, Masami; Shirayama, Yasuhiko; Imoto, Issei; Hinode, Daisuke; Yanagisawa, Shizuko; Takeuchi, Yuko; Bando, Takashi; Yokota, Narushi
2016-01-01
AIM To investigate awareness and attitudes about preventive dental visits among dialysis patients, and to clarify the barriers to visiting the dentist. METHODS Subjects included 141 dentate outpatients receiving hemodialysis treatment at two facilities, one with a dental department and the other without. We used a structured questionnaire to interview participants about their awareness of oral health management issues for dialysis patients, perceived oral symptoms, and attitudes about dental visits. Bivariate analysis using the χ² test was conducted to determine associations between study variables and regular dental check-ups. Binomial logistic regression analysis was used to determine factors associated with regular dental check-ups. RESULTS There were no significant differences in patient demographics between the two participating facilities, including attitudes about dental visits; therefore, we included all patients in the following analyses. Few patients (4.3%) had been referred to a dentist by a medical doctor or nurse. Although 80.9% of subjects had a primary dentist, only 34.0% received regular dental check-ups. The most common reasons cited for not seeking dental care were that visits are burdensome and a lack of perceived need. Patients with gum swelling or bleeding were much more likely to be in the group not receiving routine dental check-ups (χ² test, P < 0.01). Logistic regression analysis demonstrated that receiving dental check-ups was associated with awareness that oral health management is more important for dialysis patients than for others, and with having a primary dentist (P < 0.05). CONCLUSION Dialysis patients should be educated about the importance of preventive dental care. Medical providers are expected to participate in promoting dental visits among dialysis patients. PMID:27648409
Quantification of fetal heart rate regularity using symbolic dynamics
NASA Astrophysics Data System (ADS)
van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.
2007-03-01
Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimesters of pregnancy. In each fetal RR interval time series, short-term beat-to-beat heart rate changes were coded as 8-bit binary sequences. Redundancies among the 2^8 = 256 different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary values. There was a specific, nonuniform distribution of the sequences in the pattern sets, and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four, and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations, which may be associated with directed fetal activity, movement, or baroreflex activity. Application of the pattern classes revealed that sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns. Using symbolic dynamics to
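The approximate entropy used above can be sketched generically (this is a plain ApEn(m, r) implementation in Pincus' formulation with self-matches included; the paper's specific 8-bit coding and pattern-set classification are not reproduced, and the parameter choices are illustrative):

```python
import numpy as np

def apen(x, m=2, r=0.5):
    """Approximate entropy ApEn(m, r): near 0 for regular (predictable)
    sequences, larger for irregular ones. Templates of length m and m+1
    are compared under the Chebyshev distance with tolerance r."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        # fraction of templates within tolerance r of each template
        C = [(np.abs(emb - t).max(axis=1) <= r).mean() for t in emb]
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)
```

A strictly alternating binary sequence scores near zero, while a random binary sequence of the same length scores markedly higher, which is the contrast the pattern-set analysis builds on.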
Iterative reconstruction for bioluminescence tomography with total variation regularization
NASA Astrophysics Data System (ADS)
Jin, Wenma; He, Yonghong
2012-12-01
Bioluminescence tomography (BLT) is a molecular imaging modality designed for the 3D localization and quantification of bioluminescent source distributions in vivo. In our context, the diffusion approximation (DA) to the radiative transfer equation (RTE) is used to model the forward process of light propagation. Mathematically, solution uniqueness does not hold for DA-based BLT, which is an inverse source problem for partial differential equations and hence is highly ill-posed. In the current work, we concentrate on a general regularization framework for BLT with a Bregman distance as data fidelity and total variation (TV) as regularization. Two specializations of the Bregman distance, the least-squares (LS) distance and the Kullback-Leibler (KL) divergence, corresponding to Gaussian and Poisson noise environments respectively, are demonstrated, and the resulting regularization problems are denoted LS+TV and KL+TV. Based on the constrained Landweber (CL) scheme and the expectation-maximization (EM) algorithm for BLT, iterative algorithms for the LS+TV and KL+TV problems are developed, denoted CL-TV and EM-TV respectively. Both are essentially gradient-based algorithms alternately performing the standard CL or EM iteration step and a TV correction step, which requires the solution of a weighted ROF model. Chambolle's duality-based approach is adapted and extended to solve the weighted ROF subproblem. Numerical experiments on a 3D heterogeneous mouse phantom are carried out and preliminary results are reported to verify and evaluate the proposed algorithms. It is found that for piecewise-constant sources both CL-TV and EM-TV outperform the conventional CL and EM algorithms for BLT.
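Neither CL-TV nor EM-TV is reproduced here, but the effect of a TV correction step can be sketched with a smoothed-TV denoiser on a 1D piecewise-constant signal (the smoothing `eps`, step size, and iteration count are illustrative assumptions; the paper's weighted ROF step uses Chambolle's dual algorithm instead of plain gradient descent):

```python
import numpy as np

def tv_denoise_1d(y, lam=1.0, eps=1e-2, step=0.02, n_iter=3000):
    """Gradient descent on ||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps),
    a smoothed total-variation model: noise in flat regions is averaged out
    while large jumps (edges) survive because the TV gradient saturates."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        g = np.diff(x)
        w = g / np.sqrt(g ** 2 + eps)   # derivative of each smoothed-TV term
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= w
        grad_tv[1:] += w
        x -= step * (2.0 * (x - y) + lam * grad_tv)
    return x
```

On a noisy step signal the output is markedly closer to the clean signal than the input, the behavior TV regularization is chosen for with piecewise-constant sources.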
Group-regularized individual prediction: theory and application to pain.
Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D
2017-01-15
Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain from brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker (in this case, the Neurologic Pain Signature, NPS) improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study.
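The core of GRIP is blending the population-level biomarker prediction with the individual's cross-validated prediction according to their relative variances; a generic inverse-variance-weighting sketch (the paper derives the exact weights, and the names and values here are illustrative):

```python
import numpy as np

def grip_combine(pred_pop, pred_ind, var_pop, var_ind):
    """Precision-weighted blend of a population-level prediction and an
    individual (cross-validated) prediction; the noisier source gets
    proportionally less weight."""
    w_pop = (1.0 / var_pop) / (1.0 / var_pop + 1.0 / var_ind)
    return w_pop * np.asarray(pred_pop) + (1.0 - w_pop) * np.asarray(pred_ind)
```

When the individual map is very noisy the blend falls back to the population biomarker, and with equal variances the two sources are averaged, which is the behavior the theoretical weighting is designed to produce.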
Auditory feedback in error-based learning of motor regularity.
van Vugt, Floris T; Tillmann, Barbara
2015-05-05
Music and speech are skills that require high temporal precision of motor output. A key question is how humans achieve this timing precision given the poor temporal resolution of somatosensory feedback, which is classically considered to drive motor learning. We hypothesise that auditory feedback critically contributes to the learning of timing and that, similarly to visuo-spatial learning models, learning proceeds by correcting a proportion of perceived timing errors. Thirty-six participants learned to tap a sequence regularly in time. For participants in the synchronous-sound group, a tone was presented simultaneously with every keystroke. For the jittered-sound group, the tone was presented after a random delay of 10-190 ms following the keystroke, thus degrading the temporal information that the sound provided about the movement. For the mute group, no keystroke-triggered sound was presented. In line with the model predictions, participants in the synchronous-sound group were able to improve tapping regularity, whereas the jittered-sound and mute groups were not. The improved tapping regularity of the synchronous-sound group also transferred to a novel sequence and was maintained when the sound was subsequently removed. The present findings provide evidence that humans engage in auditory feedback error-based learning to improve movement quality (here, to reduce variability in sequence tapping). We thus elucidate the mechanism by which high temporal precision of movement can be achieved through sound in a way that may not be possible with less temporally precise somatosensory modalities. Furthermore, the finding that sound-supported learning generalises to novel sequences suggests potential rehabilitation applications.
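The error-correction account can be sketched with a toy linear phase-correction model (the gain and noise values are illustrative assumptions, not the paper's fitted parameters): correcting a proportion alpha of each perceived timing error keeps asynchronies bounded, whereas alpha = 0 (no usable feedback, as in the jittered-sound and mute groups) lets them drift as a random walk.

```python
import numpy as np

def simulate_tapping(alpha, n_taps=2000, motor_sd=10.0, seed=0):
    """Toy timing model: asynchrony a_{n+1} = (1 - alpha) * a_n + noise (ms),
    where alpha is the proportion of the perceived error corrected each tap.
    Returns the standard deviation of the produced asynchronies."""
    rng = np.random.default_rng(seed)
    a = 0.0
    history = []
    for _ in range(n_taps):
        a = (1.0 - alpha) * a + rng.normal(0.0, motor_sd)
        history.append(a)
    return float(np.std(history))
```

With alpha = 0.5 the asynchrony variability settles near motor_sd / sqrt(1 - (1 - alpha)^2), while with alpha = 0 it grows without bound, mirroring the contrast between the synchronous-sound group and the other two groups.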
A regularization of the carbon cycle data-fusion problem
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2013-04-01
Improving our understanding of the carbon cycle is an important component of modelling climate and the Earth system, and a variety of data assimilation techniques have been used to combine process models with different types of observational data. Here, we carry out a careful mathematical analysis of a simple, yet generic, version of the carbon allocation inverse problem. At the heart of a Bayesian approach to data-model fusion is the following problem: given a generalized observation operator H and observations y, determine the model state x that minimizes |Hx - y| in a given norm. Such a problem is well-posed if a unique solution x = H^(-1)y exists and if the inverse of H is continuous. However, in discrete models such a problem can be ill-conditioned, and hence ill-posed, when the singular values of H decay to zero. Our analysis is carried out on the evergreen version of the Data Assimilation-Linked Ecosystem model (DALEC EV). DALEC EV depicts a forest ecosystem as a set of five carbon pools: the gross primary production (GPP) is calculated at a daily time step as a function of the foliar carbon and meteorological drivers; following a mass-conservation principle, the GPP is then entirely allocated to the carbon pools and to respiration via fluxes. While this model is very simple, it represents the basic processes simulated by more sophisticated models of the carbon cycle, and the low dimension of the state variable (five carbon pools and eleven parameters) allows direct solution by methods that would otherwise be hopeless. Using synthetic observations of net ecosystem exchange (NEE), defined as the difference between GPP and respiration, we study the conditioning of the inverse problem. We find that the generalized observation operator is ill-conditioned, and we study the impact of various regularization techniques: generalized Tikhonov regularization, total least squares, etc. Finally, we use the formalism of control theory to apply model-reduction techniques to the regularization
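The conditioning issue and the stabilizing effect of Tikhonov regularization can be sketched generically (the 5×5 toy operator merely stands in for the linearized observation operator H; its singular values, the noise level, and `lam` are illustrative assumptions, not DALEC EV values):

```python
import numpy as np

def tikhonov_solve(H, y, lam):
    """argmin_x ||H x - y||^2 + lam * ||x||^2 = (H^T H + lam I)^{-1} H^T y;
    small singular values of H are filtered instead of amplifying noise."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

rng = np.random.default_rng(0)
# Toy ill-conditioned operator: singular values spanning eight orders of magnitude.
Q1, _ = np.linalg.qr(rng.standard_normal((5, 5)))
Q2, _ = np.linalg.qr(rng.standard_normal((5, 5)))
H = Q1 @ np.diag([1.0, 1e-2, 1e-4, 1e-6, 1e-8]) @ Q2.T
x_true = rng.standard_normal(5)
y = H @ x_true + 1e-4 * rng.standard_normal(5)  # small observation noise

x_naive = np.linalg.solve(H, y)          # noise amplified by up to 1/1e-8
x_reg = tikhonov_solve(H, y, lam=1e-8)
```

The regularized estimate stays moderate while the naive inversion is dominated by amplified noise, mirroring the conditioning analysis of the carbon-allocation inverse problem.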
1. PLAN OF MOXHAM, JOHNSTOWN, PENNA. ALL REGULAR LOTS 40 ...
1. PLAN OF MOXHAM, JOHNSTOWN, PENNA. ALL REGULAR LOTS 40 FT BY 120 FT. TRACED FROM DRAWING 10742 (dated February 1, 1892). THE JOHNSON COMPANY, SCALE 1 INCH - 160 FT, SEPT. 19TH 1898. DRAWING NUMBER 29781. Original plan for the Town of Moxham drafted in 1887-88, company archives contain several revised blueprints of the original plan. This revision reflects the subdivision of the Von Lunch Grove into residential lots, but still indicates the 'Moxham Block' on which the original Moxham Estate was built in 1888-89. (Photograph of drawing held at the Johnstown Corporation General Office, Johnstown, PA) - Borough of Moxham, Johnstown, Cambria County, PA
On the low regularity of the Benney-Lin equation
NASA Astrophysics Data System (ADS)
Chen, Wengu; Li, Junfeng
2008-03-01
We consider the low regularity of the Benney-Lin equation u_t + uu_x + u_xxx + β(u_xx + u_xxxx) + ηu_xxxxx = 0. We establish global well-posedness for the initial value problem of the Benney-Lin equation in the Sobolev spaces H^s for 0 ≥ s > -2, improving the well-posedness result of Biagioni and Linares [H.A. Biagioni, F. Linares, On the Benney-Lin and Kawahara equations, J. Math. Anal. Appl. 211 (1997) 131-152]. For s < -2 we also prove some ill-posedness issues.
Deterministic regularization of three-dimensional optical diffraction tomography
Sung, Yongjin; Dasari, Ramachandra R.
2012-01-01
In this paper we discuss a deterministic regularization algorithm to handle the missing cone problem of three-dimensional optical diffraction tomography (ODT). The missing cone problem arises in most practical applications of ODT and is responsible for elongation of the reconstructed shape and underestimation of the value of the refractive index. By applying positivity and piecewise-smoothness constraints in an iterative reconstruction framework, we effectively suppress the missing cone artifact and recover sharp edges rounded out by the missing cone, and we significantly improve the accuracy of the predictions of the refractive index. We also show the noise handling capability of our algorithm in the reconstruction process. PMID:21811316
Ideality contours and thermodynamic regularities in supercritical molecular fluids
NASA Astrophysics Data System (ADS)
Desgranges, Caroline; Margo, Abigail; Delhommelle, Jerome
2016-08-01
Using Expanded Wang-Landau simulations, we calculate the ideality contours for three molecular fluids (SF6, CO2, and H2O). We analyze how the increase in polarity, and thus in the strength of the intermolecular interactions, impacts the contours and thermodynamic regularities. This effect results in an increase in the Boyle and H parameters, which underlie the Zeno line and the curve of ideal enthalpy. Furthermore, a detailed analysis reveals that dipole-dipole interactions lead to much larger enthalpic contributions to the Gibbs free energy. This accounts for the much higher temperatures and pressures that are necessary for supercritical H2O to achieve ideal-like thermodynamic properties.
The Behavior of Regular Satellites During the Planetary Migration
NASA Astrophysics Data System (ADS)
Nogueira, Erica Cristina; Gomes, R. S.; Brasser, R.
2013-05-01
The behavior of the regular satellites of the giant planets during the instability phase of the Nice model needs to be better understood. In order to explain this behavior, we used numerical simulations to investigate the evolution of the regular satellite systems of the ice giants when these two planets experienced encounters with the gas giants. For the initial conditions we placed an ice planet in between Jupiter and Saturn, according to the evolution of Nice model simulations in a 'jumping Jupiter' scenario (Brasser et al. 2009). We used the MERCURY integrator (Chambers 1999) and cloned simulations by slightly modifying the Hybrid integrator changeover parameter. We obtained 101 successful runs which kept all planets, of which 24 were jumping Jupiter cases. Subsequently we performed additional numerical integrations in which the ice giant that encountered a gas giant was started on the same orbit but with its regular satellites included. This is done as follows: for each of the 101 basic runs, we saved the orbital elements of all objects in the integration at all close encounter events. Then we performed a backward integration to start the system 100 years before the encounter and re-enacted the forward integration with the regular satellites around the ice giant. These integrations ran for 1000 years. The final orbital elements of the satellites with respect to the ice planet were used to restart the integration for the next planetary encounter (if any). If we assume that Uranus is the ice planet that had encounters with a gas giant, we considered the satellites Miranda, Ariel, Umbriel, Titania and Oberon with their present orbits around the planet. For Neptune we introduced Triton on an orbit with a semi-major axis 15% larger than the present one to account for the tidal decay from the LHB to present time. We also assume that Triton was captured through binary disruption (Agnor and Hamilton 2006, Nogueira et al. 2011) and
An inverse method with regularity condition for transonic airfoil design
NASA Technical Reports Server (NTRS)
Zhu, Ziqiang; Xia, Zhixun; Wu, Liyi
1991-01-01
It is known from Lighthill's exact solution of the incompressible inverse problem that in the inverse design problem, the surface pressure distribution and the free stream speed cannot both be prescribed independently. This implies the existence of a constraint on the prescribed pressure distribution. The same constraint exists at compressible speeds. Presented here is an inverse design method for transonic airfoils. In this method, the target pressure distribution contains a free parameter that is adjusted during the computation to satisfy the regularity condition. Some design results are presented in order to demonstrate the capabilities of the method.
Infants use temporal regularities to chunk objects in memory.
Kibbe, Melissa M; Feigenson, Lisa
2016-01-01
Infants, like adults, can maintain only a few items in working memory, but can overcome this limit by creating more efficient representations, or "chunks." Previous research shows that infants can form chunks using shared features or spatial proximity between objects. Here we asked whether infants can also create chunked representations using regularities that unfold over time. Thirteen-month-old infants first were familiarized with four objects of different shapes and colors, presented in successive pairs. For some infants, the identities of objects in each pair varied randomly across familiarization (Experiment 1). For others, the objects within a pair always co-occurred, either in consistent relative spatial positions (Experiment 2a) or varying spatial positions (Experiment 2b). Following familiarization, infants saw all four objects hidden behind a screen and then saw the screen lifted to reveal either four objects or only three. Infants in Experiment 1, who had been familiarized with random object pairings, failed to look longer at the unexpected 3-object outcome; they showed the same inability to concurrently represent four objects as in other studies of infant working memory. In contrast, infants in Experiments 2a and 2b, who had been familiarized with regularly co-occurring pairs, looked longer at the unexpected outcome. These infants apparently used the co-occurrence between individual objects during familiarization to form chunked representations that were later deployed to track the objects as they were hidden at test. In Experiment 3, we confirmed that the familiarization affected infants' ability to remember the occluded objects rather than merely establishing longer-term memory for object pairs. Following familiarization to consistent pairs, infants who were not shown a hiding event (but merely saw the same test outcomes as in Experiments 2a and b) showed no preference for arrays of three versus four objects. Finally, in Experiments 4 and 5, we asked
Symbol calculus and zeta-function regularized determinants
Kaynak, Burak Tevfik; Turgut, O. Teoman
2007-11-15
In this work, we use the semigroup integral to evaluate zeta-function regularized determinants. This is especially powerful for nonpositive operators such as the Dirac operator. In order to fully understand the quantum effective action, one should know not only the potential term but also the leading kinetic term. For this purpose, we use the Weyl type of symbol calculus to evaluate the determinant as a derivative expansion. The technique is applied both to a spin-0 bosonic operator and to the Dirac operator coupled to a scalar field.
An incremental interactive algorithm for regular grammar inference
Parekh, R.; Honavar, V.
1996-12-31
Grammar inference, a problem with many applications in pattern recognition and language learning, is defined as follows: for an unknown grammar G, given a finite set of positive examples S^+ that belong to L(G), and possibly a finite set of negative examples S^-, infer a grammar G* equivalent to G. Different restrictions on S^+ and S^- and the interaction of the learner with the teacher or the environment give rise to different variants of this task. We present an interactive incremental algorithm for inference of a finite state automaton (FSA) corresponding to an unknown regular grammar.
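The usual starting point for regular-grammar inference is the prefix-tree acceptor (PTA) built from S^+, which state-merging algorithms (e.g. RPNI-style) then generalize toward G*. A minimal sketch of the PTA alone, with the merging and the interactive teacher queries omitted:

```python
def build_pta(positives):
    """Prefix-tree acceptor: the trivial FSA accepting exactly S+.
    trans maps state -> {symbol: next_state}; accept is the set of
    accepting states. State 0 is the start state."""
    trans, accept, nxt = {0: {}}, set(), 1
    for w in positives:
        s = 0
        for ch in w:
            if ch not in trans[s]:
                trans[s][ch] = nxt
                trans[nxt] = {}
                nxt += 1
            s = trans[s][ch]
        accept.add(s)
    return trans, accept

def accepts(fsa, w):
    trans, accept = fsa
    s = 0
    for ch in w:
        if ch not in trans[s]:
            return False    # no transition: reject
        s = trans[s][ch]
    return s in accept
```

An incremental learner can extend this structure as new examples arrive, merging states that are consistent with all negative examples seen so far.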
Regular Wave Propagation Out of Noise in Chemical Active Media
Alonso, S.; Sendiña-Nadal, I.; Pérez-Muñuzuri, V.; Sancho, J. M.; Sagués, F.
2001-08-13
A pacemaker, regularly emitting chemical waves, is created out of noise when an excitable photosensitive Belousov-Zhabotinsky medium, strictly unable to autonomously initiate autowaves, is forced with a spatiotemporal patterned random illumination. These experimental observations are also reproduced numerically by using a set of reaction-diffusion equations for an activator-inhibitor model, and further analytically interpreted in terms of genuine coupling effects arising from parametric fluctuations. Within the same framework we also address situations of noise-sustained propagation in subexcitable media.
Regular and anomalous quantum diffusion in the Fibonacci kicked rotator
Casati, G.; Mantica, G.; Shepelyansky, D. L.
2001-06-01
We study the dynamics of a quantum rotator, impulsively kicked according to the almost-periodic Fibonacci sequence. A special numerical technique allows us to carry on this investigation for as many as 10^12 kicks. It is shown that above a critical kick strength, the excitation of the system is well described by regular diffusion, while below this border it becomes anomalous and subdiffusive. A law for the dependence of the exponent of anomalous subdiffusion on the kick strength is established numerically. The analogy between these results and quantum diffusion in models of quasicrystals and in the kicked Harper system is discussed.
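A standard way to simulate a kicked rotator is to alternate the kick (diagonal in the angle basis) with free rotation (diagonal in the momentum basis), switching representations with an FFT; the Fibonacci modulation enters through the substitution sequence of two kick strengths. The sketch below uses a truncated momentum basis and arbitrary parameter values, not the authors' special large-time technique:

```python
import numpy as np

def fibonacci_word(n):
    # Fibonacci substitution sequence: s_{k+1} = s_k s_{k-1}
    a, b = "A", "AB"
    while len(b) < n:
        a, b = b, b + a
    return b[:n]

def kicked_rotator_fib(n_kicks=100, dim=256, T=1.0, k1=1.5, k2=2.5):
    m = np.arange(dim) - dim // 2           # truncated momentum grid
    theta = 2 * np.pi * np.arange(dim) / dim
    free = np.exp(-0.5j * T * m**2)         # free rotation (momentum basis)
    psi = np.zeros(dim, complex)
    psi[dim // 2] = 1.0                     # start in the m = 0 state
    for c in fibonacci_word(n_kicks):
        k = k1 if c == "A" else k2          # Fibonacci-modulated kick strength
        psi_theta = np.fft.fft(psi)         # momentum -> angle basis
        psi_theta *= np.exp(-1j * k * np.cos(theta))
        psi = np.fft.ifft(psi_theta) * free # back to momentum, then rotate
    return np.sum(m**2 * np.abs(psi)**2)    # energy <p^2> after n_kicks
```

Tracking the returned energy as a function of kick number distinguishes regular diffusion (linear growth) from the subdiffusive regime below the critical kick strength.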
Lipschitz Regularity for Elliptic Equations with Random Coefficients
NASA Astrophysics Data System (ADS)
Armstrong, Scott N.; Mourrat, Jean-Christophe
2016-01-01
We develop a higher regularity theory for general quasilinear elliptic equations and systems in divergence form with random coefficients. The main result is a large-scale L^∞-type estimate for the gradient of a solution. The estimate is proved with optimal stochastic integrability under a one-parameter family of mixing assumptions, allowing for very weak mixing with non-integrable correlations to very strong mixing (for example, finite range of dependence). We also prove a quenched L^2 estimate for the error in homogenization of Dirichlet problems. The approach is based on subadditive arguments which rely on a variational formulation of general quasilinear divergence-form equations.
Sharp Regularity Results for Coulombic Many-Electron Wave Functions
NASA Astrophysics Data System (ADS)
Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas; Sørensen, Thomas Østergaard
2005-04-01
We show that electronic wave functions ψ of atoms and molecules have a representation ψ = Fϕ, where F is an explicit universal factor, locally Lipschitz, and independent of the eigenvalue and the solution ψ itself, and ϕ has second derivatives which are locally in L^∞. This representation turns out to be optimal as can already be demonstrated with the help of hydrogenic wave functions. The proofs of these results are, in an essential way, based on a new elliptic regularity result which is of independent interest. Some identities that can be interpreted as cusp conditions for second order derivatives of ψ are derived.
Monotonicity Formula and Regularity for General Free Discontinuity Problems
NASA Astrophysics Data System (ADS)
Bucur, Dorin; Luckhaus, Stephan
2014-02-01
We give a general monotonicity formula for local minimizers of free discontinuity problems which have a critical deviation from minimality, of order d - 1. This result allows us to prove partial regularity results (that is closure and density estimates for the jump set) for a large class of free discontinuity problems involving general energies associated to the jump set, as for example free boundary problems with Robin conditions. In particular, we give a short proof to the De Giorgi-Carriero-Leaci result for the Mumford-Shah functional.
Lipschitz Regularity of the Eigenfunctions on Optimal Domains
NASA Astrophysics Data System (ADS)
Bucur, Dorin; Mazzoleni, Dario; Pratelli, Aldo; Velichkov, Bozhidar
2015-04-01
We study the optimal sets for spectral functionals of the form , which are bi-Lipschitz with respect to each of the eigenvalues of the Dirichlet Laplacian on , a prototype being the problem We prove the Lipschitz regularity of the eigenfunctions of the Dirichlet Laplacian on the optimal set and, as a corollary, we deduce that is open. For functionals depending only on a generic subset of the spectrum, as for example , our result proves only the existence of a Lipschitz continuous eigenfunction in correspondence to each of the eigenvalues involved.
Nonlinear image registration with bidirectional metric and reciprocal regularization
Ying, Shihui; Li, Dan; Xiao, Bin; Peng, Yaxin; Du, Shaoyi; Xu, Meifeng
2017-01-01
Nonlinear registration is an important technique to align two different images and is widely applied in medical image analysis. In this paper, we develop a novel nonlinear registration framework based on the diffeomorphic demons, where a reciprocal regularizer is introduced to assume that the deformation between two images is an exact diffeomorphism. In detail, first, we adopt a bidirectional metric to improve the symmetry of the energy functional, whose variables are two reciprocal deformations. Secondly, we slack these two deformations into two independent variables and introduce a reciprocal regularizer to assure that the deformations are exact diffeomorphisms. Then, we utilize an alternating iterative strategy to decouple the model into two minimizing subproblems, where a new closed form for the approximate velocity of deformation is calculated. Finally, we compare our proposed algorithm with two related conventional methods on two data sets of real brain MR images. The results validate that our proposed method improves the accuracy and robustness of registration, and that the obtained bidirectional deformations are indeed reciprocal. PMID:28231342
Explicit B-spline regularization in diffeomorphic image registration
Tustison, Nicholas J.; Avants, Brian B.
2013-01-01
Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
Anomalies, Hawking radiations, and regularity in rotating black holes
Iso, Satoshi; Umetsu, Hiroshi; Wilczek, Frank
2006-08-15
This is an extended version of our previous letter [S. Iso, H. Umetsu, and F. Wilczek, Phys. Rev. Lett. 96, 151302 (2006)]. In this paper we consider rotating black holes and show that the flux of Hawking radiation can be determined by anomaly cancellation conditions and a regularity requirement at the horizon. By using a dimensional reduction technique, each partial wave of quantum fields in a d=4 rotating black hole background can be interpreted as a (1+1)-dimensional charged field with a charge proportional to the azimuthal angular momentum m. From this and the analysis [S. P. Robinson and F. Wilczek, Phys. Rev. Lett. 95, 011303 (2005); S. Iso, H. Umetsu, and F. Wilczek, Phys. Rev. Lett. 96, 151302 (2006)] on Hawking radiation from charged black holes, we show that the total flux of Hawking radiation from rotating black holes can be universally determined in terms of the values of anomalies at the horizon by demanding gauge invariance and general coordinate covariance at the quantum level. We also clarify our choice of boundary conditions and show that our results are consistent with the effective action approach where regularity at the future horizon and vanishing of ingoing modes at r = ∞ are imposed (i.e., the Unruh vacuum).
FPGA-accelerated algorithm for the regular expression matching system
NASA Astrophysics Data System (ADS)
Russek, P.; Wiatr, K.
2015-01-01
This article describes an algorithm to support a regular expressions matching system. The goal was to achieve an attractive performance system with low energy consumption. The basic idea of the algorithm comes from a concept of the Bloom filter. It starts from the extraction of static sub-strings for strings of regular expressions. The algorithm is devised to gain from its decomposition into parts which are intended to be executed by custom hardware and the central processing unit (CPU). The pipelined custom processor architecture is proposed and a software algorithm explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in field programmable gate array (FPGA). The performance results and required resources of the above experiments are given. An example of target application for the presented solution is computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
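The Bloom-filter idea is that static substrings extracted from the signatures can cheaply rule out most of the input before any full regular-expression matching runs. The software-only Python sketch below illustrates the concept; the signatures, hash choice, and gram length are illustrative assumptions, not the paper's FPGA design:

```python
import hashlib
import re

class Bloom:
    """Minimal Bloom filter over short strings."""
    def __init__(self, m=1 << 16, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)
    def _hashes(self, s):
        for i in range(self.k):
            h = hashlib.blake2b(s.encode(), salt=bytes([i])).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, s):
        for h in self._hashes(s):
            self.bits[h >> 3] |= 1 << (h & 7)
    def __contains__(self, s):
        return all(self.bits[h >> 3] & (1 << (h & 7)) for h in self._hashes(s))

# Hypothetical signatures: (static substring, full regular expression).
signatures = [("EICAR", re.compile(r"EICAR-STANDARD-ANTIVIRUS-TEST")),
              ("badsig", re.compile(r"bad(sig)+42"))]
grams = 5
bloom = Bloom()
for lit, _ in signatures:
    bloom.add(lit[:grams])          # index a fixed-length prefix gram

def scan(data):
    # Prefilter: if no 5-gram of the input hits the filter, the input
    # definitely contains none of the indexed substrings.
    if not any(data[i:i + grams] in bloom for i in range(len(data) - grams + 1)):
        return []                   # fast path, no regex engine invoked
    return [lit for lit, rx in signatures if rx.search(data)]
```

In the paper's architecture the gram lookups would run in FPGA hardware, with only filter hits escalated to the CPU for full matching; false positives of the filter cost extra regex work but never change the result.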
Interfacial turbulence and regularization in electrified falling films
NASA Astrophysics Data System (ADS)
Tseluiko, Dmitri; Blyth, Mark; Lin, Te-Sheng; Kalliadasis, Serafim
2016-11-01
Consider a liquid film flowing down an inclined wall and subjected to a normal electric field. Previous studies on the problem invoked the long-wave approximation. Here, for the first time, we analyze the Stokes-flow regime using both a non-local long-wave model and the full system of governing equations. For an obtuse inclination angle and strong surface tension, the evolution of the interface is chaotic in space and time. However, a sufficiently strong electric field has a regularizing effect, and the time-dependent solution evolves into an array of continuously interacting pulses, each of which resembles a single-hump solitary pulse. This is the so-called interfacial turbulence regime. For an acute inclination angle and a sufficiently small supercritical value of the electric field, solitary-pulse solutions do not exist, and the time-dependent solution is instead a modulated array of short-wavelength waves. When the electric field is increased, the evolution of the interface first becomes chaotic, but then is regularized so that an array of pulses is generated. A coherent-structure theory for such pulses is developed and corroborated by numerical simulations. This work was supported by the EPSRC under Grants EP/J001740/1 and EP/K041134/1.
Regularities and symmetries in atomic structure and spectra
NASA Astrophysics Data System (ADS)
Pain, Jean-Christophe
2013-09-01
The use of statistical methods for the description of complex quantum systems was primarily motivated by the failure of a line-by-line interpretation of atomic spectra. Such methods reveal regularities and trends in the distributions of levels and lines. In the past, much attention was paid to the distribution of energy levels (Wigner surmise, random-matrix model…). However, information about the distribution of the lines (energy and strength) is lacking. Thirty years ago, Learner found empirically an unexpected law: the logarithm of the number of lines whose intensities lie between 2^k I_0 and 2^(k+1) I_0, I_0 being a reference intensity and k an integer, is a decreasing linear function of k. In the present work, the fractal nature of such an intriguing regularity is outlined and a calculation of its fractal dimension is proposed. Other peculiarities are also presented, such as the fact that the distribution of line strengths follows Benford's law of anomalous numbers, the existence of additional selection rules (PH coupling), the symmetry with respect to a quarter of the subshell in the spin-adapted space (LL coupling) and the odd-even staggering in the distribution of quantum numbers, pointed out by Bauche and Cossé.
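Learner's law is easy to restate operationally: bin the line intensities into octaves [2^k I_0, 2^(k+1) I_0) and check that log N_k decreases linearly in k. The following synthetic illustration uses power-law distributed intensities (the exponent is an arbitrary choice, picked only to reproduce the qualitative behavior, not fitted to any real spectrum):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic line intensities with survival function P(I > x) ~ x^-(a-1)
a, i0 = 1.7, 1.0
intensities = i0 * (1 - rng.random(20000)) ** (-1 / (a - 1))

# Count lines per octave bin [2^k I0, 2^(k+1) I0), as in Learner's law
k = np.arange(8)
counts = [np.sum((intensities >= 2**kk * i0) & (intensities < 2**(kk + 1) * i0))
          for kk in k]

# Learner's law: log N_k is a decreasing linear function of k
slope = np.polyfit(k, np.log(counts), 1)[0]
```

For a pure power law the slope equals -(a - 1) ln 2 exactly, so deviations of real spectra from linearity probe the fractal structure discussed in the abstract.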
Anxiety, Depression and Emotion Regulation Among Regular Online Poker Players.
Barrault, Servane; Bonnaire, Céline; Herrmann, Florian
2017-01-19
Poker is a type of gambling that has specific features, including the need to regulate one's emotion to be successful. The aim of the present study is to assess emotion regulation, anxiety and depression in a sample of regular poker players, and to compare the results of problem and non-problem gamblers. 416 regular online poker players completed online questionnaires including sociodemographic data, measures of problem gambling (CPGI), anxiety and depression (HAD scale), and emotion regulation (ERQ). The CPGI was used to divide participants into four groups according to the intensity of their gambling practice (non-problem, low risk, moderate risk and problem gamblers). Anxiety and depression were significantly higher among severe-problem gamblers than among the other groups. Both significantly predicted problem gambling. On the other hand, there was no difference between groups in emotion regulation (cognitive reappraisal and expressive suppression), which was linked neither to problem gambling nor to anxiety and depression (except for cognitive reappraisal, which was significantly correlated to anxiety). Our results underline the links between anxiety, depression and problem gambling among poker players. If emotion regulation is involved in problem gambling among poker players, as strongly suggested by data from the literature, the emotion regulation strategies we assessed (cognitive reappraisal and expressive suppression) may not be those involved. Further studies are thus needed to investigate the involvement of other emotion regulation strategies.
Regularized Moment Equations and Shock Waves for Rarefied Granular Gas
NASA Astrophysics Data System (ADS)
Reddy, Lakshminarayana; Alam, Meheboob
2016-11-01
It is well-known that the shock structures predicted by extended hydrodynamic models are more accurate than the standard Navier-Stokes model in the rarefied regime, but they fail to predict continuous shock structures when the Mach number exceeds a critical value. Regularization or parabolization is one method to obtain smooth shock profiles at all Mach numbers. Following a Chapman-Enskog-like method, we have derived the "regularized" version of the 10-moment equations ("R10" moment equations) for inelastic hard spheres. In order to show the advantage of the R10 moment equations over the standard 10-moment equations, the R10 moment equations have been employed to solve the Riemann problem of plane shock waves for both molecular and granular gases. The numerical results are compared between the 10-moment and R10-moment models, and it is found that the 10-moment model fails to produce continuous shock structures beyond an upstream Mach number of 1.34, while the R10-moment model predicts smooth shock profiles beyond this Mach number. The density and granular temperature profiles are found to be asymmetric, with their maxima occurring within the shock layer.
Regularization approach for tomosynthesis X-ray inspection
Tigkos, Konstantinos; Hassler, Ulf; Holub, Wolfgang; Woerlein, Norbert; Rehak, Markus
2014-02-18
X-ray inspection is intended to be used as an escalation technique for inspection of carbon fiber reinforced plastics (CFRP) in aerospace applications, especially in case of unclear indications from ultrasonic or other NDT modalities. Due to their large dimensions, most aerospace components cannot be scanned by conventional computed tomography. In such cases, X-ray Laminography may be applied, allowing a pseudo 3D slice-by-slice reconstruction of the sample with Tomosynthesis. However, due to the limited angle acquisition geometry, reconstruction artifacts arise, especially at surfaces parallel to the imaging plane. To regularize the Tomosynthesis approach, we propose an additional prescan of the object to detect outer sample surfaces. We recommend the use of contrasted markers which are temporarily attached to the sample surfaces. The depth position of the markers is then derived from that prescan. As long as the sample surface remains simple, few markers are required to fit the respective object surfaces. The knowledge about this surface may then be used to regularize the final Tomosynthesis reconstruction, performed with markerless projections. Eventually, it can also serve as prior information for an ART reconstruction or to register a CAD model of the sample. The presented work is carried out within the European FP7 project QUICOM. We demonstrate the proposed approach within a simulation study applying an acquisition geometry suited for CFRP part inspection. A practical verification of the approach is planned later in the project.
Relational-Regularized Discriminative Sparse Learning for Alzheimer's Disease Diagnosis.
Lei, Baiying; Yang, Peng; Wang, Tianfu; Chen, Siping; Ni, Dong
2017-01-16
Accurate identification and understanding of informative features are important for early Alzheimer's disease (AD) prognosis and diagnosis. In this paper, we propose a novel discriminative sparse learning method with relational regularization to jointly predict the clinical score and classify AD disease stages using multimodal features. Specifically, we apply a discriminative learning technique to expand the class-specific difference and include geometric information for effective feature selection. In addition, two kinds of relational information are incorporated to explore the intrinsic relationships among features and training subjects in terms of similarity learning. We map the original feature into the target space to identify the informative and predictive features by a sparse learning technique. A unique loss function is designed to include both the discriminative learning and relational regularization methods. Experimental results based on a total of 805 subjects [including 226 AD patients, 393 mild cognitive impairment (MCI) subjects, and 186 normal controls (NCs)] from the AD neuroimaging initiative database show that the proposed method can obtain a classification accuracy of 94.68% for AD versus NC, 80.32% for MCI versus NC, and 74.58% for progressive MCI versus stable MCI, respectively. In addition, we achieve remarkable performance in clinical score prediction and classification label identification, which is effective for AD diagnosis and prognosis. The algorithm comparison demonstrates the effectiveness of the introduced learning techniques and superiority over state-of-the-art methods.
Suggesting Missing Relations in Biomedical Ontologies Based on Lexical Regularities.
Quesada-Martínez, Manuel; Fernández-Breis, Jesualdo Tomás; Karlsson, Daniel
2016-01-01
The number of biomedical ontologies has increased significantly in recent years. Many such ontologies are the result of efforts of communities of domain experts and ontology engineers. The development and application of quality assurance (QA) methods should help these communities to develop useful ontologies for both humans and machines. According to previous studies, biomedical ontologies are rich in natural language content, but most of them are not so rich in axiomatic terms. Here, we are interested in studying the relation between content in natural language and content in axiomatic form. The analysis of the labels of the classes makes it possible to identify lexical regularities (LRs), which are sets of words that are shared by labels of different classes. Our assumption is that the classes exhibiting an LR should be logically related through axioms, which is used to propose an algorithm to detect missing relations in the ontology. Here, we analyse a lexical regularity of SNOMED CT, congenital stenosis, which is reported as problematic by the SNOMED CT maintenance team.
Filter ensemble regularized common spatial pattern for EEG classification
NASA Astrophysics Data System (ADS)
Su, Yuxi; Li, Yali; Wang, Shengjin
2015-07-01
Common Spatial Pattern (CSP) is one of the most effective feature extraction algorithms for Brain-Computer Interfaces (BCI). Despite its advantages of wide versatility and high efficiency, CSP is shown to be non-robust to noise and prone to overfitting when the number of training samples is limited. In order to overcome these problems, Regularized Common Spatial Pattern (RCSP) has been proposed. RCSP regularizes the covariance matrix estimation with two parameters, which reduces the estimation variance and improves stationarity under small-sample conditions. However, RCSP does not make full use of the frequency information. In this paper, we present a filter ensemble technique for RCSP (FERCSP) to further extract frequency information and aggregate all the RCSPs efficiently into an ensemble-based solution. The performance of the proposed algorithm is evaluated on data set IVa of BCI Competition III against five other RCSP-based algorithms. The experimental results show that FERCSP significantly outperforms the existing methods in classification accuracy. FERCSP outperforms the CSP and R-CSP-A algorithms in all five subjects with an average improvement of 6% in accuracy.
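The shrinkage step at the heart of RCSP can be sketched as follows. This is an illustrative single-parameter variant that shrinks each class covariance toward a scaled identity; the second RCSP parameter, which borrows covariance estimates from other subjects, is omitted here for lack of such data:

```python
import numpy as np
from scipy.linalg import eigh

def rcsp_filters(trials_a, trials_b, gamma=0.05):
    """Regularized-CSP sketch. trials_*: lists of (channels, samples)
    arrays. gamma shrinks each class covariance toward a scaled identity,
    stabilizing the estimate when few trials are available."""
    def avg_cov(trials):
        c = sum(x @ x.T / np.trace(x @ x.T) for x in trials)
        return np.asarray(c) / len(trials)
    n = trials_a[0].shape[0]
    def shrink(c):
        return (1 - gamma) * c + gamma * (np.trace(c) / n) * np.eye(n)
    ca, cb = shrink(avg_cov(trials_a)), shrink(avg_cov(trials_b))
    # Generalized eigenproblem ca w = lam (ca + cb) w; the eigenvectors
    # at the two extremes maximize variance for one class vs. the other.
    lam, w = eigh(ca, ca + cb)
    return w[:, [0, -1]]
```

In a filter-ensemble scheme such as FERCSP, this computation would be repeated per frequency band on band-pass filtered trials and the resulting features aggregated.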
Localized Multiple Kernel Learning With Dynamical Clustering and Matrix Regularization.
Han, Yina; Yang, Kunde; Yang, Yixin; Ma, Yuanliang
2016-12-20
Localized multiple kernel learning (LMKL) is an attractive strategy for combining multiple heterogeneous features with regard to their discriminative power for each individual sample. However, the learning of numerous local solutions may not scale well even for a moderately sized training set, and the independently learned local models may suffer from overfitting. Hence, in existing local methods, the distributed samples are typically assumed to share the same weights, and various unsupervised clustering methods are applied as preprocessing. In this paper, to enable the learner to discover and benefit from the underlying local coherence and diversity of the samples, we incorporate the clustering procedure into the canonical support vector machine-based LMKL framework. Then, to explore the relatedness among different samples, which has been ignored in a vector ℓp-norm analysis, we organize the cluster-specific kernel weights into a matrix and introduce a matrix-based extension of the ℓp-norm for constraint enforcement. By casting the joint optimization problem as a problem of alternating optimization, we show how the cluster structure is gradually revealed and how the matrix-regularized kernel weights are obtained. A theoretical analysis of such a regularizer is performed using a Rademacher complexity bound, and complementary empirical experiments on real-world data sets demonstrate the effectiveness of our technique.
A Bayesian regularized artificial neural network for adaptive optics forecasting
NASA Astrophysics Data System (ADS)
Sun, Zhi; Chen, Ying; Li, Xinyang; Qin, Xiaolin; Wang, Huiyong
2017-01-01
Real-time adaptive optics is a technology for enhancing the resolution of ground-based optical telescopes and overcoming the disturbance of atmospheric turbulence. The performance of the system is limited by delay errors induced by the servo system and by the photoelectron noise of the wavefront sensor. To reduce these delay errors, this paper proposes a novel model to forecast the future control voltages of the deformable mirror. The predictive model is constructed as a multi-layer back-propagation network with Bayesian regularization (BRBP). For the purposes of parallel computation and reduced disturbance, we adopt a number of sub-BP neural networks to substitute for the whole network. The Bayesian regularized network assigns a probability to the network weights, allowing the network to automatically and optimally penalize excessively complex models. The simulation results show that the BRBP yields smaller mean absolute percentage error (MAPE) and mean square error (MSE) than other typical algorithms. Meanwhile, real-data analysis results show that the BRBP model has strong generalization capability and parallelism.
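In its simplest linear form, Bayesian regularization places a zero-mean Gaussian prior on the weights, so the MAP estimate is a penalized least-squares fit. A minimal sketch of that linear special case (the paper's model is a multi-layer network, and the precisions alpha and beta are normally re-estimated from the evidence rather than fixed as here):

```python
import numpy as np

def bayes_map_weights(Phi, y, alpha=1e-2, beta=1.0):
    """MAP weights with prior w ~ N(0, I/alpha) and noise precision beta:
    minimizes beta*||Phi w - y||^2 + alpha*||w||^2 (ridge regression)."""
    d = Phi.shape[1]
    A = alpha * np.eye(d) + beta * Phi.T @ Phi
    return np.linalg.solve(A, beta * Phi.T @ y)
```

Larger alpha penalizes complex (large-weight) models more strongly; the Bayesian machinery lets the data choose alpha automatically, which is what "optimally penalize excessively complex models" refers to.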
Influence of a Regular, Standardized Meal on Clinical Chemistry Analytes
Salvagno, Gian Luca; Lippi, Giuseppe; Gelati, Matteo; Montagnana, Martina; Danese, Elisa; Picheth, Geraldo; Guidi, Gian Cesare
2012-01-01
Background Preanalytical variability, including biological variability and patient preparation, is an important source of variability in laboratory testing. In this study, we assessed whether a regular light meal might bias the results of routine clinical chemistry testing. Methods We studied 17 healthy volunteers who consumed light meals containing a standardized amount of carbohydrates, proteins, and lipids. We collected blood for routine clinical chemistry tests before the meal and 1, 2, and 4 hr thereafter. Results One hour after the meal, triglycerides (TG), albumin (ALB), uric acid (UA), alkaline phosphatase (ALP), Ca, Fe, and Na levels significantly increased, whereas blood urea nitrogen (BUN) and P levels decreased. TG, ALB, Ca, Na, P, and total protein (TP) levels varied significantly. Two hours after the meal, TG, ALB, Ca, Fe, and Na levels remained significantly high, whereas BUN, P, UA, and total bilirubin (BT) levels decreased. Clinically significant variations were recorded for TG, ALB, ALT, Ca, Fe, Na, P, BT, and direct bilirubin (BD) levels. Four hours after the meal, TG, ALB, Ca, Fe, Na, lactate dehydrogenase (LDH), P, Mg, and K levels significantly increased, whereas UA and BT levels decreased. Clinically significant variations were observed for TG, ALB, ALT, Ca, Na, Mg, K, C-reactive protein (CRP), AST, UA, and BT levels. Conclusions The significant variation in clinical chemistry parameters after a regular meal shows that fasting time needs to be carefully considered when performing tests, to prevent spurious results and reduce laboratory errors, especially in an emergency setting. PMID:22779065
Exploring local regularities for 3D object recognition
NASA Astrophysics Data System (ADS)
Tian, Huaiwen; Qin, Shengfeng
2016-11-01
To find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method: localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that combining two local regularities, L-MSDA and L-MSDSM, produces better performance. The best weightings for their combination are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined use of L-MSDA and L-MSDSM with these weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
The ROI CT problem: a shearlet-based regularization approach
NASA Astrophysics Data System (ADS)
Bubba, T. A.; Porta, F.; Zanghirati, G.; Bonettini, S.
2016-10-01
The possibility of significantly reducing the X-ray radiation dose and shortening the scanning time is particularly appealing to the medical imaging community. Region-of-interest Computed Tomography (ROI CT) has this potential and, for this reason, is currently receiving increasing attention. Due to the truncation of projection images, ROI CT is a rather challenging problem: the ROI reconstruction problem is severely ill-posed in general, and naive local reconstruction algorithms tend to be very unstable. To obtain a stable and reliable reconstruction under suitable noise conditions, we formulate the ROI CT problem as a convex optimization problem with a possibly nonsmooth regularization term based on shearlets. For the solution, we propose and analyze an iterative approach based on the variable metric inexact line-search algorithm (VMILA). The reconstruction performance of VMILA is compared against different regularization conditions in the case of fan-beam CT simulated data. The numerical tests show that our approach is insensitive to the location of the ROI and remains very stable even when the ROI size is rather small.
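The optimization structure can be sketched with plain ISTA: a gradient step on the data-fidelity term followed by a proximal (soft-thresholding) step on an ℓ1 regularizer standing in for shearlet sparsity. VMILA adds a variable metric and an inexact line search on top of this basic scheme; the sketch below is the unscaled, exact-step special case:

```python
import numpy as np

def ista(A, b, lam=0.1, step=None, iters=200):
    """Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1, a simplified
    stand-in for the variable-metric inexact line-search scheme."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L with L = ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                      # gradient of the data term
        z = x - step * g                           # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # prox of lam*||.||_1
    return x
```

In the ROI CT setting, A would be the truncated projection operator and the soft threshold would act on shearlet coefficients rather than directly on pixels.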
Autocorrelation and regularization in digital images. I - Basic theory
NASA Technical Reports Server (NTRS)
Jupp, David L. B.; Strahler, Alan H.; Woodcock, Curtis E.
1988-01-01
Spatial structure occurs in remotely sensed images when the imaged scenes contain discrete objects that are identifiable in that their spectral properties are more homogeneous within than between them and other scene elements. The spatial structure introduced is manifest in statistical measures such as the autocovariance function and variogram associated with the scene, and it is possible to formulate these measures explicitly for scenes composed of simple objects of regular shapes. Digital images result from sensing scenes with an instrument having an associated point spread function (PSF). Since there is averaging over the PSF, the effect, termed regularization, induced in the image data by the instrument will influence the observable autocovariance and variogram functions of the image data. It is shown how the autocovariance or variogram of an image is a composition of the underlying scene covariance convolved with an overlap function, which is itself the convolution of the PSF with itself. The functional form of this relationship provides an analytic basis for scene inference and eventual inversion of scene model parameters from image data.
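In one dimension the stated relationship reads: image autocovariance = scene autocovariance ∗ overlap function, where the overlap function is the PSF correlated with itself. A minimal numpy sketch of that composition:

```python
import numpy as np

def image_autocov(scene_autocov, psf):
    """Autocovariance of the regularized (sensed) 1-D image: the scene
    autocovariance convolved with the overlap function, which is the
    autocorrelation of the PSF (convolution with its reflected copy)."""
    overlap = np.convolve(psf, psf[::-1])      # PSF autocorrelation
    return np.convolve(scene_autocov, overlap)
```

A delta-function PSF leaves the scene autocovariance unchanged, while a broad PSF smears it out, which is the "regularization" effect the abstract describes.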
Theory of volume transition in polyelectrolyte gels with charge regularization.
Hua, Jing; Mitra, Mithun K; Muthukumar, M
2012-04-07
We present a theory for polyelectrolyte gels that allow the effective charge of the polymer backbone to self-regulate. Using a variational approach, we obtain an expression for the free energy of gels that accounts for the gel elasticity, free energy of mixing, counterion adsorption, local dielectric constant, electrostatic interaction among polymer segments, electrolyte ion correlations, and self-consistent charge regularization on the polymer strands. This free energy is then minimized to predict the behavior of the system as characterized by the gel volume fraction as a function of external variables such as temperature and salt concentration. We present results for the volume transition of polyelectrolyte gels in salt-free solvents, solvents with monovalent salts, and solvents with divalent salts. The results of our theoretical analysis capture the essential features of existing experimental results and also provide predictions for further experimentation. Our analysis highlights the importance of the self-regularization of the effective charge for the volume transition of gels in particular, and for charged polymer systems in general. Our analysis also enables us to identify the dominant free energy contributions for charged polymer networks and provides a framework for further investigation of specific experimental systems.
Regularizing the divergent structure of light-front currents
Bakker, Bernard L. G.; Choi, Ho-Meoyng; Ji, Chueng-Ryong
2001-04-01
The divergences appearing in the (3+1)-dimensional fermion-loop calculations are often regulated by smearing the vertices in a covariant manner. Performing a parallel light-front calculation, we corroborate the similarity between the vertex-smearing technique and the Pauli-Villars regularization. In the light-front calculation of the electromagnetic meson current, we find that the persistent end-point singularity that appears in the case of point vertices is removed even if the smeared vertex is taken to the limit of the point vertex. Recapitulating the current conservation, we substantiate the finiteness of both valence and nonvalence contributions in all components of the current with the regularized bound-state vertex. However, we stress that each contribution, valence or nonvalence, depends on the reference frame even though the sum is always frame independent. The numerical taxonomy of each contribution including the instantaneous contribution and the zero-mode contribution is presented in the π, K, and D-meson form factors.
Explicit B-spline regularization in diffeomorphic image registration.
Tustison, Nicholas J; Avants, Brian B
2013-01-01
Diffeomorphic mappings are central to image registration, due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying or constant velocity fields and by symmetry considerations. Prior information in the form of regularization is used to enforce transform plausibility, taking the form of physics-based constraints or some approximation thereof, e.g., Gaussian smoothing of the vector fields (à la Thirion's Demons; Thirion, 1998). In the context of the original Demons framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. The implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools.
Nonlocal Mumford-Shah regularizers for color image restoration.
Jung, Miyoun; Bresson, Xavier; Chan, Tony F; Vese, Luminita A
2011-06-01
We propose a class of restoration algorithms for color images, based upon the Mumford-Shah (MS) model and nonlocal image information. The Ambrosio-Tortorelli and Shah elliptic approximations are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, texture is nonlocal in nature and requires semilocal/nonlocal information for efficient image denoising and restoration. Inspired by recent works (the nonlocal means of Buades, Coll, and Morel, and the nonlocal total variation of Gilboa and Osher), we extend the local Ambrosio-Tortorelli and Shah approximations of the MS functional to novel nonlocal formulations, for better restoration of fine structures and texture. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, color image super-resolution, and color filter array demosaicing. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. We also prove several characterizations of minimizers based upon dual norm formulations.
Mixed noise removal by weighted encoding with sparse nonlocal regularization.
Jiang, Jielin; Zhang, Lei; Yang, Jian
2014-06-01
Mixed noise removal from natural images is a challenging task, since the noise distribution usually does not have a parametric model and has a heavy tail. One typical kind of mixed noise is additive white Gaussian noise (AWGN) coupled with impulse noise (IN). Many mixed noise removal methods are detection-based: they first detect the locations of IN pixels and then remove the mixed noise. However, such methods tend to generate many artifacts when the mixed noise is strong. In this paper, we propose a simple yet effective method, namely weighted encoding with sparse nonlocal regularization (WESNR), for mixed noise removal. In WESNR, there is no explicit step of impulse pixel detection; instead, soft impulse pixel detection via weighted encoding is used to deal with IN and AWGN simultaneously. Meanwhile, the image sparsity prior and the nonlocal self-similarity prior are integrated into a regularization term and introduced into the variational encoding framework. Experimental results show that the proposed WESNR method achieves leading mixed noise removal performance in terms of both quantitative measures and visual quality.
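The "weighted encoding" idea, soft impulse detection through residual-dependent weights, can be illustrated with a bare iteratively reweighted least-squares loop. The sparse and nonlocal self-similarity priors of WESNR are omitted here, and the weight formula is a generic robust choice, not the paper's:

```python
import numpy as np

def weighted_encoding(D, y, iters=10, eps=1e-6):
    """Illustrative weighted least-squares encoding of signal y over
    dictionary D: samples with large residuals (impulse-like) get small
    weights, re-estimated on every pass (IRLS, approximating an L1 fit)."""
    x = np.linalg.lstsq(D, y, rcond=None)[0]       # plain LS initialization
    for _ in range(iters):
        r = y - D @ x
        w = 1.0 / (np.abs(r) + eps)                # soft impulse detection
        W = np.diag(w)
        x = np.linalg.solve(D.T @ W @ D, D.T @ W @ y)
    return x
```

Pixels with large residuals barely influence the next estimate, so a fit over the values [1, 1, 10] converges toward 1 rather than toward the impulse-contaminated mean of 4.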
Regularization of chaos by noise in electrically driven nanowire systems
NASA Astrophysics Data System (ADS)
Hessari, Peyman; Do, Younghae; Lai, Ying-Cheng; Chae, Junseok; Park, Cheol Woo; Lee, GyuWon
2014-04-01
The electrically driven nanowire systems are of great importance to nanoscience and engineering. Due to strong nonlinearity, chaos can arise, but in many applications it is desirable to suppress chaos. The intrinsically high-dimensional nature of the system prevents application of the conventional methods of controlling chaos. Remarkably, we find that the phenomenon of coherence resonance, which has been well documented but only for low-dimensional chaotic systems, can occur in the nanowire system, which mathematically is described by two coupled nonlinear partial differential equations subject to periodic driving and noise. In particular, we find that, when the nanowire is in either the weakly chaotic or the extensively chaotic regime, an optimal level of noise can significantly enhance the regularity of the oscillations. This result is robust because it holds regardless of whether the noise is white or colored, and of whether the stochastic drivings in the two independent directions transverse to the nanowire are correlated or independent of each other. Noise can thus regularize chaotic oscillations through the mechanism of coherence resonance in the nanowire system. More generally, we posit that noise can provide a practical way to harness chaos in nanoscale systems.
A general framework for regularized, similarity-based image restoration.
Kheradmand, Amin; Milanfar, Peyman
2014-12-01
Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function consisting of a new data fidelity term and a regularization term derived from this definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This yields desirable spectral properties for the normalized Laplacian: it is symmetric, positive semidefinite, and returns the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations: in each outer iteration, the similarity weights are recomputed using the previous estimate, and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to carry out spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general: we show its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
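The symmetry-preserving balancing step can be sketched with a symmetric Sinkhorn-style iteration: find a diagonal scaling D so that C = D K D is doubly stochastic, then use L = I - C. The update rule below is one standard choice; the paper's exact balancing algorithm may differ:

```python
import numpy as np

def balanced_laplacian(K, iters=100):
    """Normalized graph Laplacian L = I - D K D, with D = diag(d) chosen by
    a symmetric Sinkhorn-style iteration so that D K D is doubly stochastic.
    K must be a symmetric nonnegative similarity (kernel) matrix."""
    d = np.ones(K.shape[0])
    for _ in range(iters):
        d = np.sqrt(d / (K @ d))            # fixed point: d_i * (K d)_i = 1
    C = d[:, None] * K * d[None, :]         # symmetric, rows/cols sum to 1
    return np.eye(K.shape[0]) - C
```

Because C is symmetric and doubly stochastic, L is symmetric positive semidefinite and maps any constant image to zero, exactly the spectral properties the abstract lists.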
Regular and stochastic behavior of Parkinsonian pathological tremor signals
NASA Astrophysics Data System (ADS)
Yulmetyev, R. M.; Demin, S. A.; Panischev, O. Yu.; Hänggi, Peter; Timashev, S. F.; Vstovsky, G. V.
2006-09-01
Regular and stochastic behavior in the time series of Parkinsonian pathological tremor velocity is studied on the basis of the statistical theory of discrete non-Markov stochastic processes and flicker-noise spectroscopy. We have developed a new method of analyzing and diagnosing Parkinson's disease (PD) by taking into consideration discreteness, fluctuations, long- and short-range correlations, regular and stochastic behavior, Markov and non-Markov effects and dynamic alternation of relaxation modes in the initial time signals. The spectrum of the statistical non-Markovity parameter reflects Markovity and non-Markovity in the initial time series of tremor. The relaxation and kinetic parameters used in the method allow us to estimate the relaxation scales of diverse scenarios of the time signals produced by the patient in various dynamic states. The local time behavior of the initial time correlation function and the first point of the non-Markovity parameter give detailed information about the variation of pathological tremor in the local regions of the time series. The obtained results can be used to find the most effective method of reducing or suppressing pathological tremor in each individual case of a PD patient. Generally, the method allows one to assess the efficacy of the medical treatment for a group of PD patients.
Talking Physics to Regular People: The Why and the How
NASA Astrophysics Data System (ADS)
Perkowitz, Sidney
2013-04-01
The huge popular interest in the Higgs boson shows that non-physicists can be fascinated by the ideas of physics, even highly abstract ones. That's one good reason to talk physics to ``regular people.'' A second important reason is that society supports physics and in return, deserves to know what physicists are doing. Another is the need to engage young people who may become physicists. Yet another is that when we translate our work so anyone can grasp it, we ourselves better understand it and what it means outside the lab. Especially in today's climate where funding for science, and science itself, are under threat, it's essential that regular people know us, what we do, and why it is important. That's the ``why'' of talking physics. To discuss the ``how,'' I'll draw on my long and extensive experience in presenting physics, technology and science to non-scientists through books and articles, blogs, videos, lectures, stage and museum works, and media appearances (see http://sidneyperkowitz.net). I'll offer ideas about talking physics to different groups, at different levels, and for different purposes, and about how to use such outreach to enrich your own career in physics while helping the physics community.
Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.
Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C
2015-03-01
The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well-known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved smooth deformation fields, it cannot properly deal with the discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and smooth parts of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons algorithm with a bilateral filter. In contrast to the Gaussian filter, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way the motion field can be smoothed depending on image content, as opposed to classical Gaussian filtering. By proper adjustment of two tunable parameters, one can obtain more realistic deformations in the presence of discontinuities. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well-known POPI dataset. Despite the increased computational complexity, the improved registration result is justified, in particular for abdominal data sets where discontinuities often appear due to sliding organ motion.
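A minimal 1-D sketch of the proposed smoothing step, with illustrative parameter values: each sample of the displacement field is averaged over a window, weighted by both spatial distance and intensity difference in the image, so smoothing stops at intensity edges such as sliding-organ boundaries:

```python
import numpy as np

def bilateral_filter_1d(u, img, sigma_s=2.0, sigma_r=0.2, radius=5):
    """Bilateral smoothing of a 1-D displacement field u, guided by image
    intensities img: only spatially close samples with similar intensity
    contribute, so the filter preserves motion discontinuities at edges."""
    out = np.empty_like(u)
    for i in range(len(u)):
        lo, hi = max(0, i - radius), min(len(u), i + radius + 1)
        j = np.arange(lo, hi)
        w = np.exp(-((j - i) ** 2) / (2 * sigma_s ** 2)
                   - ((img[j] - img[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = (w * u[j]).sum() / w.sum()
    return out
```

The two tunable parameters mentioned in the abstract correspond to sigma_s (spatial scale, as in the Gaussian Demons regularizer) and sigma_r (intensity range scale, which controls how strongly edges block smoothing).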
Regular patterns in subglacial bedforms demonstrate emergent field behaviour
NASA Astrophysics Data System (ADS)
Clark, Chris; Ely, Jeremy; Spagnolo, Matteo; Hahn, Ute; Stokes, Chris; Hughes, Anna
2016-04-01
Somewhat counter-intuitively, ice sheets abhor flat beds when flowing over soft sedimentary substrates. Instead, they produce an undulated surface, metres in relief and with length-scales of hundreds of metres. The resistive stresses that such bumps impart on ice flow affect the functioning of ice sheets by slowing ice transfer to lower elevations for melting and calving. The most abundant roughness elements are drumlins, streamlined in the direction of ice flow. Their formation has eluded scientific explanation for almost two centuries, with the literature seeking mechanistic explanations for individual bumps. Here we analyse tens of thousands of drumlins and find that they possess a strong regularity in their spatial positioning, which requires interactions between drumlins during their formation. This demonstrates a pattern-forming behaviour that requires explanation at the scale of drumlinised landscapes, beyond that of individual drumlins. Such regularity is expected to arise from interdependence between ice flow, sediment flux and the shape of the bed, with drumlins representing a specific emergent property of these interactions. That bed roughness is found to organise itself into specific, predictable and patterned length-scales might assist the next generation of 'sliding laws' that incorporate ice-bed interactions, thereby improving modelling of ice-sheet flow.
Nonlinear image registration with bidirectional metric and reciprocal regularization.
Ying, Shihui; Li, Dan; Xiao, Bin; Peng, Yaxin; Du, Shaoyi; Xu, Meifeng
2017-01-01
Nonlinear registration is an important technique for aligning two different images and is widely applied in medical image analysis. In this paper, we develop a novel nonlinear registration framework based on the diffeomorphic demons, where a reciprocal regularizer is introduced under the assumption that the deformation between two images is an exact diffeomorphism. In detail, we first adopt a bidirectional metric to improve the symmetry of the energy functional, whose variables are two reciprocal deformations. Secondly, we relax these two deformations into two independent variables and introduce a reciprocal regularizer to ensure that the deformations remain exact diffeomorphisms. Then, we utilize an alternating iterative strategy to decouple the model into two minimization subproblems, where a new closed form for the approximate velocity of the deformation is calculated. Finally, we compare our proposed algorithm with two related conventional methods on two data sets of real brain MR images. The results validate that our proposed method improves the accuracy and robustness of registration, and that the obtained bidirectional deformations are truly reciprocal.
Regularized feature reconstruction for spatio-temporal saliency detection.
Ren, Zhixiang; Gao, Shenghua; Chia, Liang-Tien; Rajan, Deepu
2013-08-01
Multimedia applications such as image or video retrieval, copy detection, and so forth can benefit from saliency detection, which is essentially a method to identify areas in images and videos that capture the attention of the human visual system. In this paper, we propose a new spatio-temporal saliency detection framework on the basis of regularized feature reconstruction. Specifically, for video saliency detection, both temporal and spatial saliency are considered. For temporal saliency, we model the movement of the target patch as a reconstruction process using the patches in neighboring frames. A Laplacian smoothing term is introduced to model coherent motion trajectories. Motivated by psychological findings that abrupt stimuli can cause a rapid and involuntary deployment of attention, our temporal model combines the reconstruction error, the regularizer, and local trajectory contrast to measure temporal saliency. For spatial saliency, a similar sparse reconstruction process is adopted to capture the regions with high center-surround contrast. Finally, temporal saliency and spatial saliency are combined to favor salient regions with high confidence for video saliency detection. We also apply the spatial saliency part of the spatio-temporal model to image saliency detection. Experimental results on a human fixation video dataset and an image saliency detection dataset show that our method achieves the best performance over several state-of-the-art approaches.
Statistical regularities in the rank-citation profile of scientists.
Petersen, Alexander M; Stanley, H Eugene; Succi, Sauro
2011-01-01
Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c(i)(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c(i)(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c(i)(r) profiles, our results demonstrate the utility of the β(i) scaling parameter in conjunction with h(i) for quantifying individual publication impact. We show that the total number of citations C(i) tallied from a scientist's N(i) papers scales as [Formula: see text]. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
NASA Astrophysics Data System (ADS)
Buong, Nguyen; Dung, Nguyen Dinh
2014-03-01
In this paper, we present a regularization parameter choice in a new regularization method of Browder-Tikhonov type for finding a common solution of a finite system of ill-posed operator equations involving Lipschitz continuous and accretive mappings in a real reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm. An estimate of the convergence rate of the regularized solutions is also established.
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.
Oral infection, regular alcohol drinking pattern, and myocardial infarction.
Håheim, Lise Lund; Olsen, Ingar; Rønningen, Kjersti S
2012-12-01
Oral infections have been associated with an increased risk of myocardial infarction (MI) and other cardiovascular diseases (CVD). Conversely, low, regular alcohol consumption is associated with a lower incidence of CVD. The objective was to test the novel hypothesis that the effect of oral infections is modified by regular alcohol drinking, which has the effect of lowering the incidence of MIs. The effect has been observed where tooth extractions were carried out due to infections, compared with extractions unconnected to infections. Oral infections, and in particular periodontal infections, impose an infectious load on the health of many people. In its advanced forms (periodontal pockets ≥ 6 mm), periodontitis affects ~10-15% of adults. The infection runs a chronic course with exacerbations. The bacteria cause local infection destructive to the supporting tissues of the teeth and have been detected in systemic diseases through bacterial products and bacteria entering the circulation. The often persistent, long-term history of chronic periodontal infection in individuals is a challenge to the immune system. Over 700 oral bacteria and other microorganisms have been identified, many of which are virulent. The level of oral microbiota is controlled through well-known oral hygiene measures. Alcohol, being bactericidal, is a factor that may reduce the bacterial level in the oral cavity. If this effect truly exists, it should be observed through a reduction of infections in the mouth. Tooth extraction is the ultimate consequence of periodontal and dental infections, and a reduction of tooth extractions due to infections should therefore be observed. The hypothesis was tested using the screening data of the Oslo II-study in a cross-sectional analysis. The Oslo-study included men aged 48-67 years. The main finding was that a drinking pattern of 2-7 times per week reduced the risk of MI among men who had a history of tooth extractions due to infections versus tooth
The Regular Classroom Teacher and the Process of Inclusion of the Child With Disabilities (El Maestro de Sala Regular de Clases Ante el Proceso de Inclusion del Nino Con Impedimento)
ERIC Educational Resources Information Center
Rosa Morales, Awilda
2012-01-01
The purpose of this research was to describe the experiences of regular class elementary school teachers with the Puerto Rico Department of Education who have worked with handicapped children who have been integrated to the regular classroom. Five elementary level regular class teachers were selected in the northwest zone of Puerto Rico who during…
ERIC Educational Resources Information Center
Furey, Eileen M.; Strauch, James D.
1983-01-01
The results indicate that, while there is consonance between the self-perceived skills and knowledge of special educators and how regular educators view them, there is an apparent dissonance between how regular educators view themselves and special educators' perceptions of regular elementary teachers. (Author)
Age-related changes in the use of regular patterns for auditory scene analysis.
Rimmele, Johanna; Schröger, Erich; Bendixen, Alexandra
2012-07-01
A recent approach to auditory processing suggests a close relationship of regularity processing in auditory sensory memory (ASM) and stream segregation, such that within-stream regularities can be used to stabilize stream segregation. The present study investigates age-related changes in how regular patterns are used for auditory scene analysis (ASA), when the stream containing the regularity is attended or unattended. In order to accomplish an intensity level deviant detection task, participants had to segregate the task-relevant pure tone sequence from an irrelevant distractor pure tone sequence, which randomly varied in level. In three conditions a simple spectro-temporal regularity ("Isochronous"), a more complex spectro-temporal regularity ("Rhythmic"), or no regularity ("Random") was embedded in either the attended target sequence (Experiment 1), or the unattended distractor sequence (Experiment 2). When the sequence containing the regularity was attended, older participants showed a similar increase of performance to younger adults in the conditions with regular patterns ("Isochronous" and "Rhythmic") compared to the "Random" condition. In contrast, when the sequence containing the regularity was unattended, older adults showed a specific performance decline compared to younger adults in the "Isochronous" condition. Results suggest a link between impaired automatic processing of regularities in ASM, and age-related deficits in the use of regular patterns for ASA.
ERIC Educational Resources Information Center
Halvorsen, Ann T.; And Others
This needs assessment instrument was developed as part of the PEERS (Providing Education for Everyone in Regular Schools) Project, a California project to integrate students with severe disabilities who were previously at special centers into services at regular school sites and students who were in special classes in regular schools into general…
20 CFR 220.100 - Evaluation of disability for any regular employment.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Evaluation of disability for any regular... RAILROAD RETIREMENT ACT DETERMINING DISABILITY Evaluation of Disability § 220.100 Evaluation of disability... Railroad Retirement Act based on disability for any regular employment. Regular employment...
20 CFR 220.100 - Evaluation of disability for any regular employment.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Evaluation of disability for any regular... RAILROAD RETIREMENT ACT DETERMINING DISABILITY Evaluation of Disability § 220.100 Evaluation of disability... Railroad Retirement Act based on disability for any regular employment. Regular employment...
20 CFR 220.26 - Disability for any regular employment, defined.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Disability for any regular employment, defined... RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is...
20 CFR 220.100 - Evaluation of disability for any regular employment.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Evaluation of disability for any regular... RAILROAD RETIREMENT ACT DETERMINING DISABILITY Evaluation of Disability § 220.100 Evaluation of disability... Railroad Retirement Act based on disability for any regular employment. Regular employment...
20 CFR 220.26 - Disability for any regular employment, defined.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Disability for any regular employment... RAILROAD RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child...
20 CFR 220.26 - Disability for any regular employment, defined.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Disability for any regular employment... RAILROAD RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child...
On the Distinction between Regular and Irregular Inflectional Morphology: Evidence from Dinka
ERIC Educational Resources Information Center
Ladd, D. Robert; Remijsen, Bert; Manyang, Caguor Adong
2009-01-01
Discussions of the psycholinguistic significance of regularity in inflectional morphology generally deal with languages in which regular forms can be clearly identified and revolve around whether there are distinct processing mechanisms for regular and irregular forms. We present a detailed description of Dinka's notoriously irregular noun number…
L1-Regularized Reconstruction Error as Alpha Matte
NASA Astrophysics Data System (ADS)
Johnson, Jubin; Cholakkal, Hisham; Rajan, Deepu
2017-04-01
Sampling-based alpha matting methods have traditionally followed the compositing equation to estimate the alpha value at a pixel from a pair of foreground (F) and background (B) samples. The (F,B) pair that produces the least reconstruction error is selected, followed by alpha estimation. The significance of that residual error has been left unexamined. In this letter, we propose a video matting algorithm that uses L1-regularized reconstruction error of F and B samples as a measure of the alpha matte. A multi-frame non-local means framework using coherency sensitive hashing is utilized to ensure temporal coherency in the video mattes. Qualitative and quantitative evaluations on a dataset exclusively for video matting demonstrate the effectiveness of the proposed matting algorithm.
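As context for the residual the letter examines, the classical per-pixel alpha estimate from the compositing equation can be sketched as follows. This is a minimal NumPy illustration with made-up colour vectors; the paper's L1-regularized, multi-frame machinery is not reproduced here.

```python
import numpy as np

def estimate_alpha(C, F, B):
    """Classical alpha estimate for pixel colour C from an (F, B) sample pair:
    the least-squares solution of the compositing equation C = a*F + (1-a)*B,
    obtained by projecting C - B onto F - B."""
    d = F - B
    alpha = float(np.dot(C - B, d) / np.dot(d, d))
    return min(1.0, max(0.0, alpha))  # clamp to the valid matte range [0, 1]

def reconstruction_error(C, F, B, alpha):
    """Residual of the compositing equation for a candidate (F, B, alpha)."""
    return float(np.linalg.norm(C - (alpha * F + (1.0 - alpha) * B)))

# A pixel that is an exact 30/70 blend reconstructs with zero error.
F = np.array([1.0, 0.0, 0.0])   # pure red foreground sample
B = np.array([0.0, 0.0, 1.0])   # pure blue background sample
C = 0.3 * F + 0.7 * B
a = estimate_alpha(C, F, B)
err = reconstruction_error(C, F, B, a)
```

Sampling-based matting evaluates this residual over many candidate (F, B) pairs; the letter's contribution is to reinterpret an L1-regularized version of the residual itself as the matte measure.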
Local graph regularized coding for salient object detection
NASA Astrophysics Data System (ADS)
Huo, Lina; Yang, Shuyuan; Jiao, Licheng; Wang, Shuang; Shi, Jiao
2016-07-01
Subspace segmentation based salient object detection has received increasing interest in recent years. To preserve the locality and similarity of regions, a grouping effect of representation is introduced to segment the salient object and background in subspace. A new saliency map is then calculated by incorporating this local graph regularizer into coding, which explicitly exploits the data self-representation model and thus locates salient regions more accurately. Moreover, a heuristic object-based dictionary is obtained from background superpixels in the border set, removing the image regions that lie within the potential object regions. Experimental results on four large benchmark databases demonstrate that the proposed method performs favorably against eight recent state-of-the-art methods in terms of three evaluation criteria, with a reduction of MAE by 19.8% relative to GR and 29.3% relative to CB on the two SED datasets, respectively. Meanwhile, our method also runs faster than the comparative detection approaches.
Surface tension regularizes the crack singularity of adhesion.
Karpitschka, Stefan; van Wijngaarden, Leen; Snoeijer, Jacco H
2016-05-11
The elastic and adhesive properties of a solid surface can be quantified by indenting it with a rigid sphere. Indentation tests are classically described by the JKR-law when the solid is very stiff, while recent work highlights the importance of surface tension for exceedingly soft materials. Here we show that surface tension plays a crucial role even in stiff solids: Young's wetting angle emerges as a boundary condition and this regularizes the crack-like singularity at the edge of adhesive contacts. We find that the edge region exhibits a universal, self-similar structure that emerges from the balance of surface tension and elasticity. The similarity theory is solved analytically and provides a complete description of adhesive contacts, by which we reconcile global adhesion laws and local contact mechanics.
The effect of regularization on the reconstruction of ACAR data
NASA Astrophysics Data System (ADS)
Weber, J. A.; Ceeh, H.; Hugenschmidt, C.; Leitner, M.; Böni, P.
2014-04-01
The Fermi surface, i.e. the two-dimensional surface separating occupied and unoccupied states in k-space, is the defining property of a metal. Full information about its shape is mandatory for identifying nesting vectors or for validating band structure calculations. With the angular correlation of positron-electron annihilation radiation (ACAR) it is easy to get projections of the Fermi surface. Nevertheless it is claimed to be inexact compared to more common methods like the determination based on quantum oscillations or angle-resolved photoemission spectroscopy. In this article we will present a method for reconstructing the Fermi surface from projections with statistically correct data treatment which is able to increase accuracy by introducing different types of regularization.
Human behavioral regularity, fractional Brownian motion, and exotic phase transition
NASA Astrophysics Data System (ADS)
Li, Xiaohui; Yang, Guang; An, Kenan; Huang, Jiping
2016-08-01
The mix of competition and cooperation (C&C) is ubiquitous in human society, which, however, remains poorly explored due to the lack of a fundamental method. Here, by developing a Janus game for treating C&C between two sides (suppliers and consumers), we show, for the first time, experimental and simulation evidence for human behavioral regularity. This property is proved to be characterized by fractional Brownian motion associated with an exotic transition between periodic and nonperiodic phases. Furthermore, the periodic phase echoes business cycles, which are well known in reality but still far from being well understood. Our results imply that the Janus game could be a fundamental method for studying C&C among humans in society, and it provides guidance for predicting human behavioral activity from the perspective of fractional Brownian motion.
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
Neural network for constrained nonsmooth optimization using Tikhonov regularization.
Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun
2015-03-01
This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. Firstly, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problem. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality feasible region in finite time and is globally convergent to the unique optimal solution of the related strongly convex optimization problem. Compared with existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. In the end, some numerical examples and an application are given to illustrate the effectiveness and improvement of the proposed neural network.
Restriction enzyme cutting site distribution regularity for DNA looping technology.
Shang, Ying; Zhang, Nan; Zhu, Pengyu; Luo, Yunbo; Huang, Kunlun; Tian, Wenying; Xu, Wentao
2014-01-25
The restriction enzyme cutting site distribution regularity and looping conditions were studied systematically. We obtained the restriction enzyme cutting site distributions of 13 commonly used restriction enzymes in 5 model organism genomes through two novel self-compiled software programs. All of the average distances between two adjacent restriction sites fell sharply with increasing statistical interval, and most fragments were 0-499 bp. A shorter DNA fragment resulted in a lower looping rate, which was also directly proportional to the DNA concentration. When the length was more than 500 bp, the concentration did not affect the looping rate. Therefore, the optimal fragment length was longer than 500 bp and did not contain the restriction enzyme cutting sites that would be used for digestion. In order to make the looping efficiencies reach nearly 100%, 4-5 single cohesive-end systems were recommended to digest the genome separately.
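The distance statistics described above amount to locating every recognition site in a sequence and differencing adjacent positions. A minimal sketch, using a toy sequence and the EcoRI site (GAATTC) rather than the paper's genome-scale software:

```python
def cut_site_positions(seq, site):
    """Return the 0-based start positions of every (possibly overlapping)
    occurrence of a recognition site in a DNA sequence."""
    positions, i = [], seq.find(site)
    while i != -1:
        positions.append(i)
        i = seq.find(site, i + 1)
    return positions

def adjacent_distances(positions):
    """Distances between consecutive cutting sites, the quantity whose
    genome-wide distribution the paper tallies."""
    return [b - a for a, b in zip(positions, positions[1:])]

# EcoRI (GAATTC) in a toy sequence: sites at positions 2 and 12.
seq = "AAGAATTCTTTTGAATTCAA"
pos = cut_site_positions(seq, "GAATTC")
dists = adjacent_distances(pos)
```

Run over a whole genome and binned by interval, these adjacent distances give the fragment-length distribution discussed in the abstract.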
Expansion shock waves in regularized shallow-water theory
NASA Astrophysics Data System (ADS)
El, Gennady A.; Hoefer, Mark A.; Shearer, Michael
2016-05-01
We identify a new type of shock wave by constructing a stationary expansion shock solution of a class of regularized shallow-water equations that include the Benjamin-Bona-Mahony and Boussinesq equations. An expansion shock exhibits divergent characteristics, thereby contravening the classical Lax entropy condition. The persistence of the expansion shock in initial value problems is analysed and justified using matched asymptotic expansions and numerical simulations. The expansion shock's existence is traced to the presence of a non-local dispersive term in the governing equation. We establish the algebraic decay of the shock as it is gradually eroded by a simple wave on either side. More generally, we observe a robustness of the expansion shock in the presence of weak dissipation and in simulations of asymmetric initial conditions where a train of solitary waves is shed from one side of the shock.
Regular and chaotic motion of high altitude satellites
NASA Astrophysics Data System (ADS)
Wytrzyszczak, I.; Breiter, S.; Borczyk, W.
We have computed the integrated autocorrelation function for different families of geosynchronous, inclined orbits in order to detect the regions of chaotic motion. In order to reduce the problems due to high-eccentricity orbits, the logarithmic Hamiltonian regularization was applied and a symplectic integrator of the Wisdom-Holman type was implemented. The orbits were integrated for an interval of 10,000 days. The results indicate that non-predictable orbits can be found in this relatively short time in the separatrix zone of the 1:1 tesseral resonance. Their chaotic nature results from the interchange between libration- and circulation-type motion, and from the significant eccentricity growth caused by the Kozai-Lidov resonance. Some of these orbits intersect the Earth's surface in a time shorter than 20 years for a particular initial geometry of the interacting bodies.
Predictors of irrational thinking in regular slot machine gamblers.
Delfabbro, P H; Winefield, A H
2000-03-01
Previous research has suggested that irrational thinking may play a central role in the maintenance of behavior in slot machine gambling (M. B. Walker, 1992b). The present study is an evaluation of the validity and predictors of irrational thinking in a sample of regular gamblers (N = 20) drawn from the general community. The results were generally consistent with earlier findings; 75% of gambling-related cognitions were found to be irrational. Irrationality was unrelated to the amount of money lost or won during sessions but was positively related to risk taking. The most common irrational cognitions included false beliefs concerning the extent to which outcomes could be controlled or predicted and the attribution of human qualities (personification) to gambling devices. Gender comparisons showed that women were more likely than men to personify the machines. The validity of the speaking-aloud approach and suggestions for future research are discussed.
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the Finite Element Method and the Euler method are used to approximate the solution of the SIR differential equations. Newly infected influenza data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact rate, proportional to the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
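The estimation procedure can be illustrated on synthetic data: integrate the SIR model with forward Euler, then fit the contact rate by regularized least squares. This is a simplified sketch, with a grid search standing in for the paper's Finite Element machinery and all parameter values made up:

```python
import numpy as np

def sir_euler(beta, gamma, S0, I0, dt, steps):
    """Forward-Euler integration of the (normalized) SIR model;
    returns the infected curve I(t)."""
    S, I = S0, I0
    infected = [I]
    for _ in range(steps):
        new_inf = beta * S * I * dt    # new infections this step
        rec = gamma * I * dt           # recoveries this step
        S, I = S - new_inf, I + new_inf - rec
        infected.append(I)
    return np.array(infected)

def fit_beta(data, gamma, S0, I0, dt, lam=1e-3,
             grid=np.linspace(0.1, 1.0, 91)):
    """Regularized least-squares fit of the contact rate: minimize
    ||model(beta) - data||^2 + lam * beta^2 over a grid of candidates."""
    def cost(b):
        model = sir_euler(b, gamma, S0, I0, dt, len(data) - 1)
        return np.sum((model - data) ** 2) + lam * b ** 2
    return min(grid, key=cost)

# Generate synthetic "observed" infections with a known contact rate,
# then recover it.
true_beta = 0.5
data = sir_euler(true_beta, 0.1, S0=0.99, I0=0.01, dt=0.1, steps=100)
beta_hat = fit_beta(data, 0.1, 0.99, 0.01, 0.1)
```

With real, noisy incidence data the regularization term keeps the estimate stable; here it simply biases the fit slightly toward smaller contact rates.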
General Structure of Regularization Procedures in Image Reconstruction
NASA Astrophysics Data System (ADS)
Titterington, D. M.
1985-03-01
Regularization procedures are portrayed as compromises between the conflicting aims of fidelity with the observed image and perfect smoothness. The selection of an estimated image involves the choice of a prescription, indicating the manner of smoothing, and of a smoothing parameter, which defines the degree of smoothing. Prescriptions of the minimum-penalized-distance type are considered and are shown to be equivalent to maximum-penalized-smoothness prescriptions. These include, therefore, constrained least-squares and constrained maximum entropy methods. The formal link with Bayesian statistical analysis is pointed out. Two important methods of choosing the degree of smoothing are described, one based on criteria of consistency with the data and one based on minimizing a risk function. The latter includes minimum mean-squared error criteria. Although the maximum entropy method has some practical advantages, there seems no case for it to hold a special place on philosophical grounds, in the context of image reconstruction.
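A minimum-penalized-distance prescription of the kind discussed can be sketched as penalized least squares with a second-difference smoothness penalty. The penalty choice and the 1-D signal are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def penalized_smooth(y, lam):
    """Minimum-penalized-distance estimate: minimize
    ||y - x||^2 + lam * ||D x||^2, where D is the second-difference
    operator (the smoothness penalty) and lam the smoothing parameter."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)            # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Noisy sine wave; lam trades fidelity to y against smoothness of x.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50)
x = penalized_smooth(y, lam=10.0)
```

At lam = 0 the estimate reproduces the data exactly; as lam grows, the solution approaches the smoothest admissible image, which is the fidelity/smoothness compromise the paper formalizes.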
Numerical optimization method for packing regular convex polygons
NASA Astrophysics Data System (ADS)
Galiev, Sh. I.; Lisafina, M. S.
2016-08-01
An algorithm is presented for the approximate solution of the problem of packing regular convex polygons in a given closed bounded domain G so as to maximize the total area of the packed figures. On G a grid is constructed whose nodes generate a finite set W on G, and the centers of the figures to be packed can be placed only at some points of W. The problem of packing these figures with centers in W is reduced to a 0-1 linear programming problem. A two-stage algorithm for solving the resulting problems is proposed. The algorithm finds packings of the indicated figures in an arbitrary closed bounded domain on the plane. Numerical results are presented that demonstrate the effectiveness of the method.
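The 0-1 formulation can be illustrated at toy scale: axis-aligned squares stand in for regular polygons, and brute-force subset enumeration stands in for the paper's two-stage 0-1 linear programming algorithm. A sketch under those assumptions:

```python
from itertools import combinations

def pack_squares(centers, side):
    """Choose a subset of candidate centers (grid nodes of W) maximizing the
    number of packed axis-aligned squares of the given side length, subject
    to pairwise non-overlap. Brute force over subsets, largest first."""
    def overlap(c1, c2):
        # Two axis-aligned squares of equal side overlap iff their centers
        # are closer than one side length in both coordinates.
        return abs(c1[0] - c2[0]) < side and abs(c1[1] - c2[1]) < side
    for r in range(len(centers), 0, -1):
        for subset in combinations(centers, r):
            if all(not overlap(a, b) for a, b in combinations(subset, 2)):
                return list(subset)   # first feasible subset of maximum size
    return []

# 3x3 grid of candidate centers with spacing 1; squares of side 1.5
# can only occupy the four corners without overlapping.
centers = [(i, j) for i in range(3) for j in range(3)]
packed = pack_squares(centers, 1.5)
```

The real algorithm replaces this exponential enumeration with a 0-1 linear program over the same decision variables (one binary per candidate placement) plus pairwise non-overlap constraints.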
Constructing a logical, regular axis topology from an irregular topology
Faraj, Daniel A.
2014-07-01
Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.
Constructing a logical, regular axis topology from an irregular topology
Faraj, Daniel A.
2014-07-22
Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.
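Stripped of the inter-node messaging, the line-building step described in both records reduces to greedily extending a logical line through not-yet-used nearest neighbors. A much-simplified single-process sketch; the recursive call-and-return protocol between compute nodes, and the per-axis recursion, are not modeled:

```python
def build_logical_line(start, neighbors):
    """Greedy sketch of constructing one logical line of an axial dimension:
    starting from a node, repeatedly add the first listed neighbor that is
    not yet on the line; stop when the current end of the line has no unused
    neighbor. `neighbors` maps each node to its nearest-neighbor list for
    this axis."""
    line = [start]
    current = start
    while True:
        nxt = next((n for n in neighbors[current] if n not in line), None)
        if nxt is None:
            return line          # analogous to returning to the calling node
        line.append(nxt)
        current = nxt

# Hypothetical 4-node irregular topology flattened into one logical line.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
line = build_logical_line(0, neighbors)
```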
Regularized discriminative spectral regression method for heterogeneous face matching.
Huang, Xiangsheng; Lei, Zhen; Fan, Mingyu; Wang, Xiao; Li, Stan Z
2013-01-01
Face recognition is confronted with situations in which face images are captured in various modalities, such as the visual modality, the near infrared modality, and the sketch modality. This is known as heterogeneous face recognition. To solve this problem, we propose a new method called discriminative spectral regression (DSR). The DSR maps heterogeneous face images into a common discriminative subspace in which robust classification can be achieved. In the proposed method, the subspace learning problem is transformed into a least squares problem. Different mappings should map heterogeneous images from the same class close to each other, while images from different classes should be separated as far as possible. To realize this, we introduce two novel regularization terms, which reflect the category relationships among data, into the least squares approach. Experiments conducted on two heterogeneous face databases validate the superiority of the proposed method over the previous methods.
Harmonic R matrices for scattering amplitudes and spectral regularization.
Ferro, Livia; Łukowski, Tomasz; Meneghelli, Carlo; Plefka, Jan; Staudacher, Matthias
2013-03-22
Planar N = 4 supersymmetric Yang-Mills theory appears to be integrable. While this allows one to find this theory's exact spectrum, integrability has hitherto been of no direct use for scattering amplitudes. To remedy this, we deform all scattering amplitudes by a spectral parameter. The deformed tree-level four-point function turns out to be essentially the one-loop R matrix of the integrable N = 4 spin chain satisfying the Yang-Baxter equation. Deformed on-shell three-point functions yield novel three-leg R matrices satisfying bootstrap equations. Finally, we supply initial evidence that the spectral parameter might find its use as a novel symmetry-respecting regulator replacing dimensional regularization. Its physical meaning is a local deformation of particle helicity, a fact which might be useful for a much larger class of nonintegrable four-dimensional field theories.
Calibration maintenance and transfer using Tikhonov regularization approaches.
Kalivas, John H; Siano, Gabriel G; Andries, Erik; Goicoechea, Hector C
2009-07-01
Maintaining multivariate calibrations is essential and involves keeping models developed on an instrument applicable to predicting new samples over time. Sometimes a primary instrument model is needed to predict samples measured on secondary instruments. This situation is referred to as calibration transfer. This paper reports on using a Tikhonov regularization (TR) based method in both cases. A distinction of the TR design for calibration maintenance and transfer is a defined weighting scheme for a small set of new (transfer or standardization) samples augmented to the full set of calibration samples. Because straight application of basic TR theory is not always possible with calibration maintenance and transfer, this paper develops a generic solution to always enable application of TR. Harmonious (bias/variance tradeoff) and parsimonious (effective rank) considerations for TR are compared with the same TR format applied to partial least squares (PLS), showing that both approaches are viable solutions to the calibration maintenance and transfer problems.
Superior regularity in erosion patterns by planar subsurface channeling.
Redinger, Alex; Hansen, Henri; Linke, Udo; Rosandi, Yudi; Urbassek, Herbert M; Michely, Thomas
2006-03-17
The onset of pattern formation through exposure of Pt(111) with 5 keV Ar(+) ions at grazing incidence has been studied at 550 K by scanning tunneling microscopy and is supplemented by molecular-dynamics simulations of single ion impacts. A consistent description of pattern formation in terms of atomic scale mechanisms is given. Most surprisingly, pattern formation depends crucially on the angle of incidence of the ions. As soon as this angle allows subsurface channeling of the ions, pattern regularity and alignment with respect to the ion beam greatly improves. These effects are traced back to the positionally aligned formation of vacancy islands through the damage created by the ions at dechanneling locations.
Simple regular black hole with logarithmic entropy correction
NASA Astrophysics Data System (ADS)
Morales-Durán, Nicolás; Vargas, Andrés F.; Hoyos-Restrepo, Paulina; Bargueño, Pedro
2016-10-01
A simple regular black hole solution satisfying the weak energy condition is obtained within Einstein-non-linear electrodynamics theory. We have computed the thermodynamic properties of this black hole by a careful analysis of the horizons and we have found that the usual Bekenstein-Hawking entropy gets corrected by a logarithmic term. Therefore, in this sense our model realises some quantum gravity predictions which add this kind of correction to the black hole entropy. In particular, we have established some similitudes between our model and a quadratic generalised uncertainty principle. This similitude has been confirmed by the existence of a remnant, which prevents complete evaporation, in agreement with the quadratic generalised uncertainty principle case.
Implicit learning of arithmetic regularities is facilitated by proximal contrast.
Prather, Richard W
2012-01-01
Natural number arithmetic is a simple, powerful and important symbolic system. Despite the intense focus on learning in cognitive development and educational research, many adults have weak knowledge of the system. In the current study, participants learn arithmetic principles via an implicit learning paradigm. Participants learn not by solving arithmetic equations, but by viewing and evaluating example equations, similar to the implicit learning of artificial grammars, which we extend here to the symbolic arithmetic system. Specifically, we find that exposure to principle-inconsistent examples facilitates the acquisition of arithmetic principle knowledge if the equations are presented to the learner in a temporally proximate fashion. The results expand on research on the implicit learning of regularities and suggest that contrasting cases, shown to facilitate explicit arithmetic learning, are also relevant to the implicit learning of arithmetic.
Analytic regularization in Soft-Collinear Effective Theory
NASA Astrophysics Data System (ADS)
Becher, Thomas; Bell, Guido
2012-06-01
In high-energy processes which are sensitive to small transverse momenta, individual contributions from collinear and soft momentum regions are not separately well-defined in dimensional regularization. A simple possibility to solve this problem is to introduce additional analytic regulators. We point out that in massless theories the unregularized singularities only appear in real-emission diagrams and that the additional regulators can be introduced in such a way that gauge invariance and the factorized eikonal structure of soft and collinear emissions is maintained. This simplifies factorization proofs and implies, at least in the massless case, that the structure of Soft-Collinear Effective Theory remains completely unchanged by the presence of the additional regulators. Our formalism also provides a simple operator definition of transverse parton distribution functions.
Lipschitz regularity of solutions for mixed integro-differential equations
NASA Astrophysics Data System (ADS)
Barles, Guy; Chasseigne, Emmanuel; Ciomaga, Adina; Imbert, Cyril
We establish new Hölder and Lipschitz estimates for viscosity solutions of a large class of elliptic and parabolic nonlinear integro-differential equations, by the classical Ishii-Lions method. We thus extend the Hölder regularity results recently obtained by Barles, Chasseigne and Imbert (2011). In addition, we deal with a new class of nonlocal equations that we term mixed integro-differential equations. These equations are particularly interesting, as they are degenerate both in the local and the nonlocal term, but their overall behavior is driven by the local-nonlocal interaction: e.g., the fractional diffusion may give the ellipticity in one direction and the classical diffusion in the complementary one.
Diffuse light tomography to detect blood vessels using Tikhonov regularization
NASA Astrophysics Data System (ADS)
Kazanci, Huseyin O.; Jacques, Steven L.
2016-04-01
Detection of blood vessels within light-scattering tissues involves detection of subtle shadows as blood absorbs light. These shadows are diffuse but measurable by a set of source-detector pairs in a spatial array of sources and detectors on the tissue surface. The measured shadows can reconstruct the internal position(s) of blood vessels. The tomographic method involves a set of Ns sources and Nd detectors such that Nsd = Ns x Nd source-detector pairs produce Nsd measurements, each interrogating the tissue with a unique perspective, i.e., a unique region of sensitivity to voxels within the tissue. This tutorial report describes the reconstruction of the image of a blood vessel within a soft tissue based on such source-detector measurements, by solving a matrix equation using Tikhonov regularization. This is not a novel contribution, but rather a simple introduction to a well-known method, demonstrating its use in mapping blood perfusion.
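The matrix step of the tutorial, solving the source-detector measurement equation for the voxel absorption map with Tikhonov regularization, can be sketched directly. The dimensions and sensitivity matrix below are toy assumptions:

```python
import numpy as np

def tikhonov_reconstruct(A, b, lam):
    """Solve the measurement equation A x = b for the voxel absorption map x
    via Tikhonov regularization: x = (A^T A + lam*I)^-1 A^T b. A has one row
    per source-detector pair and one column per tissue voxel."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy setup: Nsd = 6 source-detector pairs, 4 voxels, one absorbing voxel.
rng = np.random.default_rng(1)
A = rng.random((6, 4))                      # sensitivity of each pair to each voxel
x_true = np.array([0.0, 1.0, 0.0, 0.0])    # "blood vessel" in voxel 1
b = A @ x_true                              # measured shadows
x_hat = tikhonov_reconstruct(A, b, lam=1e-8)
```

The regularization parameter lam trades noise amplification against blurring of the reconstruction; with clean data, as here, a small lam recovers the absorbing voxel.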
Random Regular Networks with Distance-limited Interdependent Links
NASA Astrophysics Data System (ADS)
Lowinger, Steven; Kornbluth, Yosef; Cwilich, Gabriel; Buldyrev, Sergey
2014-03-01
We study the mutual percolation of a system composed of two interdependent random regular networks. We introduce a notion of distance, d, to explore the effects of the proximity of interdependent nodes on the cascade of failures after an initial attack. The nature of the transition through which the networks disintegrate depends on the parameters of the system, which are the degree of the nodes and the maximum distance between interdependent nodes. As the distance and degree increase, the collapse at the critical threshold changes from a second-order transition to a first-order one. The critical threshold monotonically increases with distance. We find a transitional case, in which a novel type of phase transition appears. The case d = 1 can be completely solved analytically and it maps into a discrete version of the Rényi parking problem.
Quantitative interferometric microscopy cytometer based on regularized optical flow algorithm
NASA Astrophysics Data System (ADS)
Xue, Liang; Vargas, Javier; Wang, Shouyu; Li, Zhenhua; Liu, Fei
2015-09-01
Cell detection and analysis are important in various fields, such as medical observation and disease diagnosis. In order to analyze cell parameters as well as observe samples directly, we present an improved quantitative interferometric microscopy cytometer, which monitors the quantitative phase distributions of bio-samples and enables cellular parameter statistics. The proposed system recovers the phase image of biological samples over an expanded field of view via a regularized optical flow demodulation algorithm. This algorithm reconstructs the phase distribution with high accuracy from only two interferograms acquired at different time points, simplifying the scanning system. Additionally, the method is fully automatic, and therefore convenient for establishing a quantitative phase cytometer. Moreover, the phase retrieval approach is robust against noise and background. As a demonstration, red blood cells are readily investigated with the quantitative interferometric microscopy cytometer system.
Chiral Thirring–Wess model with Faddeevian regularization
Rahaman, Anisur
2015-03-15
Replacing the vector-type interaction of the Thirring–Wess model with a chiral-type interaction, a new model is presented, termed here the chiral Thirring–Wess model. The ambiguity parameters of regularization are chosen so that the model falls into the Faddeevian class. A model in this class does not, in general, possess Lorentz invariance. However, the arbitrariness admissible in the ambiguity parameters can be exploited to relate the quantum-mechanically generated ambiguity parameters to the classical parameter in the mass-like term of the gauge field, which maintains physical Lorentz invariance despite the absence of manifest Lorentz covariance. The phase-space structure and the theoretical spectrum of this class of models have been determined through Dirac's method of quantization of constrained systems.
Dimensional reduction in numerical relativity: Modified Cartoon formalism and regularization
NASA Astrophysics Data System (ADS)
Cook, William G.; Figueras, Pau; Kunesch, Markus; Sperhake, Ulrich; Tunyasuvunakool, Saran
2016-06-01
We present in detail the Einstein equations in the Baumgarte-Shapiro-Shibata-Nakamura formulation for the case of D-dimensional spacetimes with SO(D - d) isometry based on a method originally introduced in Ref. 1. Regularized expressions are given for a numerical implementation of this method on a vertex centered grid including the origin of the quasi-radial coordinate that covers the extra dimensions with rotational symmetry. Axisymmetry, corresponding to the value d = D - 2, represents a special case with fewer constraints on the vanishing of tensor components and is conveniently implemented in a variation of the general method. The robustness of the scheme is demonstrated for the case of a black-hole head-on collision in D = 7 spacetime dimensions with SO(4) symmetry.
Superior Regularity in Erosion Patterns by Planar Subsurface Channeling
Redinger, Alex; Hansen, Henri; Michely, Thomas; Linke, Udo; Rosandi, Yudi; Urbassek, Herbert M.
2006-03-17
The onset of pattern formation through exposure of Pt(111) to 5 keV Ar+ ions at grazing incidence has been studied at 550 K by scanning tunneling microscopy, supplemented by molecular-dynamics simulations of single ion impacts. A consistent description of pattern formation in terms of atomic-scale mechanisms is given. Most surprisingly, pattern formation depends crucially on the angle of incidence of the ions. As soon as this angle allows subsurface channeling of the ions, pattern regularity and alignment with respect to the ion beam greatly improve. These effects are traced back to the positionally aligned formation of vacancy islands through the damage created by the ions at dechanneling locations.
Tikhonov regularization in Lp applied to inverse medium scattering
NASA Astrophysics Data System (ADS)
Lechleiter, Armin; Kazimierski, Kamil S.; Karamehmedović, Mirza
2013-07-01
This paper presents Tikhonov- and iterated soft-shrinkage regularization methods for nonlinear inverse medium scattering problems. Motivated by recent sparsity-promoting reconstruction schemes for inverse problems, we assume that the contrast of the medium is supported within a small subdomain of a known search domain and minimize Tikhonov functionals with sparsity-promoting penalty terms based on Lp-norms. Analytically, this is based on scattering theory for the Helmholtz equation with the refractive index in Lp, 1 < p < ∞, and on crucial continuity and compactness properties of the contrast-to-measurement operator. Algorithmically, we use an iterated soft-shrinkage scheme combined with the differentiability of the forward operator in Lp to approximate the minimizer of the Tikhonov functional. The feasibility of this approach together with the quality of the obtained reconstructions is demonstrated via numerical examples.
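For p = 1 the iterated soft-shrinkage step has a compact form. The sketch below applies it to a linear toy operator; the paper's contrast-to-measurement operator is nonlinear, so this only illustrates the shrinkage iteration itself, and all names and values are illustrative:

```python
import numpy as np

def soft_shrink(x, t):
    """Soft-thresholding: the proximal map of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, alpha, n_iter=500):
    """Iterated soft-shrinkage for min_x 0.5*||Ax - y||^2 + alpha*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then shrinkage on the penalty
        x = soft_shrink(x - step * (A.T @ (A @ x - y)), step * alpha)
    return x

# Sparse contrast supported on a single "voxel" of the search domain
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[3] = 2.0
y = A @ x_true
x_hat = ista(A, y, alpha=0.1)
```

The shrinkage drives components outside the true support to exactly zero, which is the sparsity-promoting behavior the Lp penalty is chosen for.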
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562
Settling velocity of microplastic particles of regular shapes.
Khatmullina, Liliya; Isachenko, Igor
2017-01-30
Terminal settling velocity of around 600 microplastic particles, ranging from 0.5 to 5 mm, of three regular shapes was measured in a series of sink experiments: polycaprolactone (material density 1131 kg m(-3)) spheres and short cylinders with equal dimensions, and long cylinders cut from fishing lines (1130-1168 kg m(-3)) of different diameters (0.15-0.71 mm). Settling velocities ranging from 5 to 127 mm s(-1) were compared with several semi-empirical predictions developed for natural sediments, showing reasonable consistency with observations except for long cylinders, for which a new approximation is proposed. The effect of a particle's shape on its settling velocity is highlighted, indicating the need for further experiments with real marine microplastics of different shapes, and for a sound parameterization of microplastic settling to properly model transport in the water column.
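For the sphere data, the simplest semi-empirical baseline is Stokes' law; the sketch below evaluates it with nominal water properties. Note this is only an illustration: Stokes' law assumes low particle Reynolds number, and at the sizes above that assumption breaks down, which is why comparisons against more elaborate sediment formulas are needed:

```python
def stokes_settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Stokes-law terminal velocity (m/s) for a small sphere.

    d     : particle diameter (m)
    rho_p : particle density (kg/m^3)
    rho_f : fluid density (kg/m^3), nominal fresh water default
    mu    : dynamic viscosity (Pa s), nominal water default
    """
    return g * (rho_p - rho_f) * d ** 2 / (18.0 * mu)

# 0.5 mm polycaprolactone sphere (density 1131 kg/m^3) in water:
w = stokes_settling_velocity(0.5e-3, 1131.0)  # ~0.018 m/s = 18 mm/s
```

The result falls inside the measured 5-127 mm/s range, but for larger spheres the low-Re assumption fails and the formula overpredicts the velocity.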
Temporal and spatial regularity of mobile-phone data
NASA Astrophysics Data System (ADS)
Hoevel, Philipp; Barabasi, Albert-Laszlo
2012-02-01
Network science is a vibrant, interdisciplinary research area with strong connections to a plethora of different fields. As the amount of empirically obtained data increases, approaches from network science continue to enhance our understanding of, for instance, human dynamics. The available data often contain temporal as well as spatial information. In our case they originate from anonymized mobile-phone traces, which include the timing of connections between two mobile phones as well as their positions. Thus, the data also contain a social component. In this study, we evaluate patterns of human behavior, identifying both temporal and spatial regularity. This leads to a detailed mobility analysis on various timescales and contributes to a general theory of synchronization in complex, real-world networks.
Regularity for steady periodic capillary water waves with vorticity.
Henry, David
2012-04-13
In the following, we prove new regularity results for two-dimensional steady periodic capillary water waves with vorticity, in the absence of stagnation points. Firstly, we prove that if the vorticity function has a Hölder-continuous first derivative, then the free surface is a smooth curve and the streamlines beneath the surface will be real analytic. Furthermore, once we assume that the vorticity function is real analytic, it will follow that the wave surface profile is itself also analytic. A particular case of this result includes irrotational fluid flow where the vorticity is zero. The property of the streamlines being analytic allows us to gain physical insight into small-amplitude waves by justifying a power-series approach.
Baseline Regularization for Computational Drug Repositioning with Longitudinal Observational Data
Kuang, Zhaobin; Thomson, James; Caldwell, Michael; Peissig, Peggy; Stewart, Ron; Page, David
2016-01-01
Computational Drug Repositioning (CDR) is the knowledge discovery process of finding new indications for existing drugs leveraging heterogeneous drug-related data. Longitudinal observational data such as Electronic Health Records (EHRs) have become an emerging data source for CDR. To address the high-dimensional, irregular, subject and time-heterogeneous nature of EHRs, we propose Baseline Regularization (BR) and a variant that extend the one-way fixed effect model, which is a standard approach to analyze small-scale longitudinal data. For evaluation, we use the proposed methods to search for drugs that can lower Fasting Blood Glucose (FBG) level in the Marshfield Clinic EHR. Experimental results suggest that the proposed methods are capable of rediscovering drugs that can lower FBG level as well as identifying some potential blood sugar lowering drugs in the literature.
Regular paths in SparQL: querying the NCI Thesaurus.
Detwiler, Landon T; Suciu, Dan; Brinkley, James F
2008-11-06
OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, this has come at a cognitive cost: OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
C1,1 regularity for degenerate elliptic obstacle problems
NASA Astrophysics Data System (ADS)
Daskalopoulos, Panagiota; Feehan, Paul M. N.
2016-03-01
The Heston stochastic volatility process is a degenerate diffusion process where the degeneracy in the diffusion coefficient is proportional to the square root of the distance to the boundary of the half-plane. The generator of this process with killing, called the elliptic Heston operator, is a second-order, degenerate-elliptic partial differential operator, where the degeneracy in the operator symbol is proportional to the distance to the boundary of the half-plane. In mathematical finance, solutions to the obstacle problem for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset. With the aid of weighted Sobolev spaces and weighted Hölder spaces, we establish the optimal C^{1,1} regularity (up to the boundary of the half-plane) for solutions to obstacle problems for the elliptic Heston operator when the obstacle functions are sufficiently smooth.
Scene recognition by manifold regularized deep learning architecture.
Yuan, Yuan; Mou, Lichao; Lu, Xiaoqiang
2015-10-01
Scene recognition is an important problem in the field of computer vision, because it helps to narrow the gap between computers and human beings on scene understanding. Semantic modeling is a popular technique used to fill the semantic gap in scene recognition. However, most semantic modeling approaches learn shallow, one-layer representations for scene recognition, while ignoring the structural information relating images to each other, often resulting in poor performance. Modeled after the human visual system, and intended to inherit humanlike judgment, a manifold regularized deep architecture is proposed for scene recognition. The proposed deep architecture exploits the structural information of the data in forming the mapping between the visible layer and the hidden layer. With the proposed approach, a deep architecture can be designed to learn high-level features for scene recognition in an unsupervised fashion. Experiments on standard data sets show that our method outperforms the state of the art in scene recognition.
Pauli-Villars Regularization of Non-Abelian Gauge Theories
NASA Astrophysics Data System (ADS)
Hiller, J. R.
2016-07-01
As an extension of earlier work on QED, we construct a BRST-invariant Lagrangian for SU(N) Yang-Mills theory with fundamental matter, regulated by the inclusion of massive Pauli-Villars (PV) gluons and PV quarks. The underlying gauge symmetry for massless PV gluons is generalized to accommodate the PV-index-changing currents that are required by the regularization. Auxiliary adjoint scalars are used, in a mechanism due to Stueckelberg, to attribute mass to the PV gluons and the PV quarks. The addition of Faddeev-Popov ghosts then establishes a residual BRST symmetry. Although there are drawbacks to the approach, in particular the computational load of a large number of PV fields and a nonlocal interaction of the ghost fields, this formulation could provide a foundation for renormalizable nonperturbative solutions of light-front QCD in an arbitrary covariant gauge.
Stability and transition to chaos of regular capsule trains
NASA Astrophysics Data System (ADS)
Bryngelson, Spencer; Freund, Jonathan
2016-11-01
Elastic capsules flowing in sufficiently narrow confines, such as red blood cells in capillaries, are well-known to line up in a single-file train. The stability of such a train in less confined environments, where this organization is not observed, is investigated in a model system that includes full coupling between the viscous flow and suspended elastic capsules. A rich set of linearly amplifying disturbances, including short- and long-time perturbations (non-modal and spectral, respectively) are identified and analyzed. Finite-amplitude transiently amplifying perturbations are shown to provide a mechanism that can bypass slower asymptotic modal linear growth and precipitate the onset of nonlinear dynamics. Direct numerical simulations are used to verify the linear analysis and track the subsequent transition of the regular capsule trains into an apparently chaotic flow. This work was supported in part by the National Science Foundation under Grant No. CBET 13-36972.
Regularized Multitask Learning for Multidimensional Log-Density Gradient Estimation.
Yamane, Ikko; Sasaki, Hiroaki; Sugiyama, Masashi
2016-07-01
Log-density gradient estimation is a fundamental statistical problem with various practical applications such as clustering and measuring non-Gaussianity. A naive two-step approach of first estimating the density and then taking its log gradient is unreliable, because an accurate density estimate does not necessarily lead to an accurate log-density gradient estimate. To cope with this problem, a method to directly estimate the log-density gradient without density estimation has been explored and demonstrated to work much better than the two-step method. The objective of this letter is to improve the performance of this direct method in multidimensional cases. Our idea is to regard the problem of log-density gradient estimation in each dimension as a task and apply regularized multitask learning to the direct log-density gradient estimator. We experimentally demonstrate the usefulness of the proposed multitask method in log-density gradient estimation and mode-seeking clustering.
Manifold regularized discriminative nonnegative matrix factorization with fast gradient descent.
Guan, Naiyang; Tao, Dacheng; Luo, Zhigang; Yuan, Bo
2011-07-01
Nonnegative matrix factorization (NMF) has become a popular data-representation method and has been widely used in image processing and pattern-recognition problems. This is because the learned bases can be interpreted as a natural parts-based representation of data, an interpretation consistent with the psychological intuition of combining parts to form a whole. For practical classification tasks, however, NMF ignores both the local geometry of data and the discriminative information of different classes. In addition, existing research results show that the learned basis is not necessarily parts-based, because there is neither an explicit nor an implicit constraint to ensure that the representation is parts-based. In this paper, we introduce manifold regularization and margin maximization to NMF and obtain the manifold regularized discriminative NMF (MD-NMF) to overcome the aforementioned problems. The multiplicative update rule (MUR) can be applied to optimizing MD-NMF, but it converges slowly. In this paper, we propose a fast gradient descent (FGD) to optimize MD-NMF. FGD contains a Newton method that searches for the optimal step length, and thus FGD converges much faster than MUR. In addition, FGD includes MUR as a special case and can be applied to optimizing NMF and its variants. For a problem with 165 samples in R^1600, FGD converges in 28 s, while MUR requires 282 s. We also apply FGD to a variant of MD-NMF, and experimental results confirm its efficiency. Experimental results on several face image datasets suggest the effectiveness of MD-NMF.
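The baseline multiplicative update rule (MUR) that FGD accelerates is the classic Lee-Seung iteration; a minimal sketch for plain NMF follows (the MD-NMF manifold and margin terms, and the FGD step-length search, are omitted; sizes and iteration counts are illustrative):

```python
import numpy as np

def nmf_mur(V, r, n_iter=500, seed=0):
    """Plain NMF, min ||V - W H||_F^2 with W, H >= 0, via the
    Lee-Seung multiplicative update rules (MUR)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12                     # guards against division by zero
    for _ in range(n_iter):
        # each update multiplies by a nonnegative ratio, so
        # nonnegativity of W and H is preserved automatically
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly low-rank nonnegative data: rank 3, 20 x 15
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf_mur(V, r=3)
```

The multiplicative form is why MUR is simple but slow: the effective step size shrinks near the optimum, which is the gap FGD's Newton line search closes.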
Spherically Symmetric Space Time with Regular de Sitter Center
NASA Astrophysics Data System (ADS)
Dymnikova, Irina
We formulate the requirements which lead to the existence of a class of globally regular solutions of the minimally coupled GR equations asymptotically de Sitter at the center.
Multimodal manifold-regularized transfer learning for MCI conversion prediction.
Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang
2015-12-01
As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has a high chance of converting to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and also for evaluating AD risk pre-symptomatically. Unlike most previous methods that used only the samples from a target domain to train a classifier, in this paper, we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second is a semi-supervised multimodal manifold-regularized least squares classification method, where the target-domain samples, the auxiliary-domain samples, and the unlabeled samples can be jointly used for training our classifier. Furthermore, with the integration of a group sparsity constraint into our objective function, the proposed M2TL is capable of selecting informative samples to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method, achieving a classification accuracy of 80.1% for MCI conversion prediction and outperforming the state-of-the-art methods.
Mathematical strategies for filtering complex systems: Regularly spaced sparse observations
Harlim, J. Majda, A.J.
2008-05-01
Real time filtering of noisy turbulent signals through sparse observations on a regularly spaced mesh is a notoriously difficult and important prototype filtering problem. Simpler off-line test criteria are proposed here as guidelines for filter performance for these stiff multi-scale filtering problems in the context of linear stochastic partial differential equations with turbulent solutions. Filtering turbulent solutions of the stochastically forced dissipative advection equation through sparse observations is developed as a stringent test bed for filter performance with sparse regular observations. The standard ensemble transform Kalman filter (ETKF) has poor skill on the test bed and even suffers from filter divergence, surprisingly, at observable times with resonant mean forcing and a decaying energy spectrum in the partially observed signal. Systematic alternative filtering strategies are developed here, including the Fourier Domain Kalman Filter (FDKF) and various reduced filters called the Strongly Damped Approximate Filter (SDAF), Variance Strongly Damped Approximate Filter (VSDAF), and Reduced Fourier Domain Kalman Filter (RFDKF), which operate only on the primary Fourier modes associated with the sparse observation mesh while nevertheless incorporating into the approximate filter various features of the interaction with the remaining modes. It is shown that these much cheaper alternative filters have significant skill on the test bed of turbulent solutions which exceeds that of ETKF and in various regimes often exceeds that of FDKF, provided that the approximate filters are guided by the off-line test criteria. The skill of the various approximate filters depends on the energy spectrum of the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale in a precise fashion elucidated here.
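Because the Fourier modes of a linear stochastic PDE decouple under regularly spaced observations, an FDKF-style filter amounts to running one scalar Kalman filter per observed mode. A minimal sketch of that scalar filter follows; the damped-forced mode model and all parameter values are illustrative, not the paper's:

```python
import numpy as np

def scalar_kalman(obs, a, sigma_w, sigma_o, x0=0.0, p0=1.0):
    """Kalman filter for x_{k+1} = a x_k + w_k (Var w = sigma_w^2),
    observed as y_k = x_k + v_k (Var v = sigma_o^2)."""
    x, p = x0, p0
    estimates = []
    for y in obs:
        x, p = a * x, a * a * p + sigma_w ** 2      # forecast step
        k = p / (p + sigma_o ** 2)                  # Kalman gain
        x, p = x + k * (y - x), (1.0 - k) * p       # analysis step
        estimates.append(x)
    return np.array(estimates)

# One damped, stochastically forced mode observed through noise
rng = np.random.default_rng(2)
a, sw, so = 0.9, 0.5, 1.0
truth = np.empty(500)
x = 0.0
for t in range(500):
    x = a * x + sw * rng.standard_normal()
    truth[t] = x
obs = truth + so * rng.standard_normal(500)
est = scalar_kalman(obs, a, sw, so)
```

The reduced filters (SDAF, VSDAF, RFDKF) keep only such scalar filters for the primary modes, approximating the neglected modes' influence rather than filtering them.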
New approach to gridding using regularization and estimation theory.
Rosenfeld, Daniel
2002-07-01
When sampling under time-varying gradients, data is acquired over a non-equally spaced grid in k-space. The most computationally efficient method of reconstruction is first to interpolate the data onto a Cartesian grid, enabling the subsequent use of the inverse fast Fourier transform (IFFT). The most commonly used interpolation technique is called gridding, and comprises four steps: precompensation, convolution with a Kaiser-Bessel window, IFFT, and postcompensation. Recently, the author introduced a new gridding method called Block Uniform ReSampling (BURS), which is both optimal and efficient. The interpolation coefficients are computed by solving a set of linear equations using singular value decomposition (SVD). BURS requires neither the pre- nor the postcompensation steps, and resamples onto an n x n grid rather than the 2n x 2n matrix required by conventional gridding. This significantly decreases the computational complexity. Several authors have reported that although the BURS algorithm is very accurate, it is also sensitive to noise. As a consequence, even in the presence of a low level of measurement noise, the resulting image is often highly contaminated with noise. In this work, the origin of the noise sensitivity is traced back to the potentially ill-posed matrix inversion performed by BURS. Two approaches to the solution are presented. The first uses regularization theory to stabilize the inversion process. The second formulates the interpolation as an estimation problem, and employs estimation theory for the solution. The new algorithm, called rBURS, contains a regularization parameter, which is used to trade off the accuracy of the result against the signal-to-noise ratio (SNR). The results of the new method are compared with those obtained using conventional gridding via simulations. For the SNR performance of conventional gridding, it is shown that the rBURS algorithm exhibits equal or better accuracy. This is achieved at a decreased
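The stabilization idea can be illustrated by replacing the plain SVD inverse 1/s_i with Tikhonov filter factors s_i/(s_i^2 + lam), which cap the noise gain at small singular values. This is a schematic of the regularized inversion, not the exact rBURS interpolator; the toy system below is illustrative:

```python
import numpy as np

def regularized_svd_solve(A, b, lam):
    """Solve A x ~= b via SVD with Tikhonov filter factors.

    With lam = 0 this is the plain pseudoinverse, which amplifies
    noise by 1/s_i at tiny singular values; lam > 0 caps that gain.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s ** 2 + lam)          # filter factors replacing 1/s
    return Vt.T @ (f * (U.T @ b))

# Ill-conditioned toy system with noisy data
rng = np.random.default_rng(3)
A = np.vander(np.linspace(0, 1, 12), 8, increasing=True)  # cond >> 1
x_true = np.ones(8)
b = A @ x_true + 1e-3 * rng.standard_normal(12)
x_naive = regularized_svd_solve(A, b, lam=0.0)
x_reg = regularized_svd_solve(A, b, lam=1e-6)
```

The regularized solution fits the data to within the noise level while suppressing the wildly amplified components that dominate the naive inversion; choosing `lam` is exactly the accuracy-vs-SNR tradeoff described above.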
PDE regularization for Bayesian reconstruction of emission tomography
NASA Astrophysics Data System (ADS)
Wang, Zhentian; Zhang, Li; Xing, Yuxiang; Zhao, Ziran
2008-03-01
The aim of the present study is to investigate a type of Bayesian reconstruction which utilizes partial differential equation (PDE) image models as regularization. PDE image models are widely used in image restoration and segmentation. In a PDE model, the image can be viewed as the solution of an evolutionary differential equation, and its evolution can be regarded as descent of an energy functional, which allows us to use PDE models in Bayesian reconstruction. In this paper, two PDE models called anisotropic diffusion are studied. Both have edge-preserving and denoising characteristics similar to the popular median root prior (MRP). We use PDE regularization with an ordered-subsets (OS) accelerated Bayesian one-step-late (OSL) reconstruction algorithm for emission tomography; the OS-accelerated OSL algorithm is more practical than a non-accelerated one. The proposed algorithm is called OSEM-PDE. We validated OSEM-PDE using a Zubal phantom in numerical experiments, with attenuation correction and quantum noise considered, and compared the results with OSEM and an OS version of MRP (OSEM-MRP) reconstruction. OSEM-PDE shows better results in both bias and variance. The reconstructed images are smoother and have sharper edges, and are thus more amenable to post-processing such as segmentation; we validate this using a k-means segmentation algorithm. The classic OSEM does not converge, especially in noisy conditions. In our experiments, however, OSEM-PDE benefits from OS acceleration and remains stable and convergent, while OSEM-MRP fails to converge.
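A classic anisotropic-diffusion model of the kind described is Perona-Malik, du/dt = div(g(|grad u|) grad u) with g(s) = 1/(1 + (s/kappa)^2); the explicit-scheme sketch below shows the edge-preserving denoising behavior on a toy image (parameters are illustrative, not the paper's):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion: smooths weak (noise) gradients
    while barely diffusing across strong edges, where g -> 0."""
    def g(d):
        return 1.0 / (1.0 + (d / kappa) ** 2)
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbors, zero-flux at the borders
        dn = np.roll(u, -1, axis=0) - u; dn[-1, :] = 0.0
        ds = np.roll(u, 1, axis=0) - u;  ds[0, :] = 0.0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0.0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0.0
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy two-level "phantom" with a vertical edge at column 16
rng = np.random.default_rng(5)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img += 0.05 * rng.standard_normal((32, 32))
out = perona_malik(img)
```

Noise within the flat regions is diffused away while the step edge, whose gradient far exceeds kappa, is essentially untouched, which is the property that makes such models attractive as reconstruction priors.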
Models of cuspy triaxial stellar systems - II. Regular orbits
NASA Astrophysics Data System (ADS)
Muzzio, J. C.; Navone, H. D.; Zorzi, A. F.
2013-02-01
In the first paper of this series we used the N-body method to build a dozen cuspy (γ ≃ 1) triaxial models of stellar systems, and we showed that they were highly stable over time intervals of the order of a Hubble time, even though they had very large fractions of chaotic orbits (more than 85 per cent in some cases). The models were grouped in four sets, each one comprising models morphologically resembling E2, E3, E4 and E5 galaxies, respectively. The three models within each set, although different, had the same global properties and were statistically equivalent. In the present paper we use frequency analysis to classify the regular orbits of those models. The bulk of those orbits are short-axis tubes, with a significant fraction of long-axis tubes (LATs) in the E2 models that decreases in the E3 and E4 models to become negligibly small in the E5 models. Most of the LATs in the E2 and E3 models are outer LATs, but the situation reverses in the E4 and E5 models where the few LATs are mainly inner LATs. As could be expected for cuspy models, most of the boxes are resonant orbits, i.e. boxlets. Nevertheless, only the (x, y) fishes of models E3 and E4 amount to about 10 per cent of the regular orbits, with most of the fractions of the other boxlets being of the order of 1 per cent or less.
From numbers to letters: feedback regularization in visual word recognition.
Molinaro, Nicola; Duñabeitia, Jon Andoni; Marìn-Gutièrrez, Alejandro; Carreiras, Manuel
2010-04-01
Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter 'regularization' is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present study investigates the electrophysiological correlates of number-to-letter regularization by means of the masked priming paradigm: target words (MATERIAL) were preceded by fully alphabetic primes (MATERIAL), primes with letter-like numbers (M4T3R14L), or primes with unrelated numbers (M7T6R28L). ERPs revealed three subsequent effects. Around 150 ms the unrelated numbers condition elicited a positive effect, compared to the other two conditions, in the occipital electrodes. Then, target words preceded by primes with numbers elicited a more negative N200 in the same electrodes compared to the fully alphabetic condition. Finally, both alphabetic primes and letter-like numbers elicited a posterior positive component peaking around 260 ms compared to unrelated numbers. Source analysis for each electrophysiological effect revealed a similar early increase of activity in the left occipito-temporal pathway for alphabetic primes and primes with letter-like numbers. Around 200 ms, the orthographic interference due to the numerical values correlated with an increase of activity in parietal areas; finally, a recursive effect in the left occipital cortex was found, reflecting abstract letter activation. These results indicate that direct feedback interaction from word units strongly influences the activation of the letter units at a format-independent abstract level.
Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro
2006-05-01
This study assessed the working and family life characteristics, and the degree of domestic and work strain, of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). Among the groups we compared the subjective value of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.
Chaos and Regularity in the Doubly Magic Nucleus 208Pb
NASA Astrophysics Data System (ADS)
Dietz, B.; Heusler, A.; Maier, K. H.; Richter, A.; Brown, B. A.
2017-01-01
High-resolution experiments have recently led to a complete identification (energy, spin, and parity) of 151 nuclear levels up to an excitation energy of Ex=6.20 MeV in 208Pb [Heusler et al., Phys. Rev. C 93, 054321 (2016), 10.1103/PhysRevC.93.054321]. We present a thorough study of the fluctuation properties in the energy spectra of this unprecedented set of nuclear bound states. In a first approach, we group states with the same spin and parity into 14 subspectra, analyze standard statistical measures for short- and long-range correlations, i.e., the nearest-neighbor spacing distribution, the number variance Σ2, the Dyson-Mehta Δ3 statistics, and the novel distribution of the ratios of consecutive spacings of adjacent energy levels in each energy sequence, and then compute their ensemble average. Their comparison with a random matrix ensemble which interpolates between Poisson statistics expected for regular systems and the Gaussian orthogonal ensemble (GOE) predicted for chaotic systems shows that the data are well described by the GOE. In a second approach, following an idea of Rosenzweig and Porter [Phys. Rev. 120, 1698 (1960), 10.1103/PhysRev.120.1698], we consider the complete spectrum composed of the independent subspectra. We analyze their fluctuation properties using the method of Bayesian inference involving a quantitative measure, called the chaoticity parameter f, which also interpolates between Poisson (f = 0) and GOE statistics (f = 1). It turns out to be f ≈ 0.9. This is so far the closest agreement with a GOE observed in the spectra of bound states in a nucleus. The same analysis is also performed with spectra computed on the basis of shell model calculations with different interactions (surface-delta interaction, Kuo-Brown, Michigan-three-Yukawa). While the simple surface-delta interaction exhibits features typical for nuclear many-body systems with regular dynamics, the other, more realistic interactions yield chaoticity parameters f close
Electrophysiology of regular firing cells in the rat perirhinal cortex.
D'Antuono, M; Biagini, G; Tancredi, V; Avoli, M
2001-01-01
The electrophysiological properties of neurons in the rat perirhinal cortex were analyzed with intracellular recordings in an in vitro slice preparation. Cells included in this study (n = 59) had resting membrane potential (RMP) = -73.9 +/- 8.5 mV (mean +/- SD), action potential amplitude = 95.5 +/- 10.4 mV, input resistance = 36.1 +/- 15.7 MΩ, and time constant = 13.9 +/- 3.4 ms. When filled with neurobiotin (n = 27) they displayed a pyramidal shape with an apical dendrite and extensive basal dendritic tree. Injection of intracellular current pulses revealed: 1) a tetrodotoxin (TTX, 1 microM)-sensitive, inward rectification in the depolarizing direction (n = 6), and 2) a time- and voltage-dependent hyperpolarizing sag that was blocked by extracellular Cs+ (3 mM, n = 5) application. Prolonged (up to 3 s) depolarizing pulses made perirhinal cells discharge regular trains of fast action potentials whose frequency diminished over time and reached a steady level (i.e., adapted). Repetitive firing was followed by an afterhyperpolarization that was decreased, along with firing adaptation, by the Ca(2+)-channel blocker Co2+ (2 mM, n = 6). Action potential broadening became evident during repetitive firing. This behavior, which was more pronounced when larger pulses of depolarizing current were injected (and thus when repetitive firing attained higher rates), was markedly decreased by Co2+ application. Subthreshold membrane oscillations at 5-12 Hz became apparent when cells were depolarized by 10-20 mV from RMP, and action potential clusters appeared with further depolarization. Application of glutamatergic and GABAA receptor antagonists (n = 4), Co2+ (n = 6), or Cs+ (n = 5) did not prevent the occurrence of these oscillations, which were abolished by TTX (n = 6). Our results show that pyramidal-like neurons in the perirhinal cortex are regular firing cells with electrophysiological features resembling those of other cortical pyramidal elements. The ability to
Synthesis and Structure - Property Relationships for Regular Multigraft Copolymers
Mays, Jimmy; Uhrig, David; Gido, Samuel; Zhu, Yuqing; Weidisch, Roland; Iatrou, Hermis; Hadjichristidis, Nikos; Hong, Kunlun; Beyer, Frederick; Lach, Ralph
2004-01-01
Multigraft copolymers with polyisoprene backbones and polystyrene branches, having multiple regularly spaced branch points, were synthesized by anionic polymerization high-vacuum techniques and controlled chlorosilane linking chemistry. The functionality of the branch points (1, 2, and 4) can be controlled through the choice of chlorosilane linking agent. The morphologies of the various graft copolymers were investigated by transmission electron microscopy and X-ray scattering. It was concluded that the morphology of these complex architectures is governed by the behavior of the corresponding miktoarm star copolymer associated with each branch point (the constituting block copolymer), which follows Milner's theoretical treatment for miktoarm stars. By comparing samples having the same molecular weight backbone and branches but different numbers of branch points, it was found that the extent of long-range order decreases with increasing number of branch points. The stress-strain properties in tension were investigated for some of these multigraft copolymers. For certain compositions, thermoplastic elastomer (TPE) behavior was observed, and in many instances the elongation at break was much higher (2-3X) than that of conventional triblock TPEs.
Maximum entropy regularization of the geomagnetic core field inverse problem
NASA Astrophysics Data System (ADS)
Jackson, Andrew; Constable, Catherine; Gillet, Nicolas
2007-12-01
The maximum entropy technique is an accepted method of image reconstruction when the image is made up of pixels of unknown positive intensity (e.g. a grey-scale image). The problem of reconstructing the magnetic field at the core-mantle boundary from surface data is a problem where the target image, the value of the radial field Br, can be of either sign. We adopt a known extension of the usual maximum entropy method that can be applied to images consisting of pixels of unconstrained sign. We find that we are able to construct images which have high dynamic ranges, but which still have very simple structure. In the spherical harmonic domain they have smoothly decreasing power spectra. It is also noteworthy that these models have far less complex null flux curve topology (lines on which the radial field vanishes) than do models which are quadratically regularized. Problems such as the one addressed are ubiquitous in geophysics, and it is suggested that the applications of the method could be much more widespread than is currently the case.
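The extension to pixels of unconstrained sign can be illustrated with a commonly used signed-image entropy (the specific functional and solver used by the authors are not given here; the form below, with a default level m, and the plain gradient-descent fit on a toy linear inverse problem are assumptions for illustration only):

```python
import math

def entropy_signed(x, m):
    """Signed-image entropy with default level m > 0 (an assumed, standard
    form): S = sum_i [psi_i - 2m - x_i ln((psi_i + x_i)/(2m))],
    psi_i = sqrt(x_i^2 + 4m^2).  S <= 0, with its maximum S = 0 at x = 0."""
    s = 0.0
    for xi in x:
        psi = math.sqrt(xi * xi + 4 * m * m)
        s += psi - 2 * m - xi * math.log((psi + xi) / (2 * m))
    return s

def grad_entropy(x, m):
    # dS/dx_i = -ln((psi_i + x_i) / (2m))
    return [-math.log((math.sqrt(xi * xi + 4 * m * m) + xi) / (2 * m))
            for xi in x]

def maxent_solve(A, d, m=1.0, alpha=0.1, lr=0.01, iters=5000):
    """Minimize ||A x - d||^2 - alpha * S(x) by plain gradient descent
    (a toy stand-in for the conjugate-gradient schemes used in practice)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - d[i]
             for i in range(len(d))]
        g_chi = [2 * sum(A[i][j] * r[i] for i in range(len(d)))
                 for j in range(n)]
        g_s = grad_entropy(x, m)
        x = [x[j] - lr * (g_chi[j] - alpha * g_s[j]) for j in range(n)]
    return x

# Toy underdetermined problem: recover a signed "image" from two averages.
A = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
d = [3.0, -1.0]
x = maxent_solve(A, d)
```

Because S is maximized at the zero image, the entropy term drives unconstrained directions of the model toward zero while the data misfit fixes the constrained ones, which is what produces the simple, high-dynamic-range images described above.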
A continuation approach to regularization for traveltime tomography
Bube, K.P.; Langan, R.T.
1994-12-31
In most geometries in which seismic traveltime tomography is applied, the slowness field is not well-determined from traveltimes alone. Nonuniqueness is common. Even when the slowness field is uniquely determined, small changes in the measured traveltimes can lead to large errors in the computed slowness field. A priori information is often available--well-logs, initial rough estimates of the slowness from structural geology, etc. This a priori information can be incorporated into a traveltime inversion algorithm using penalty terms. To further regularize the problem, smoothing constraints can also be incorporated using penalty terms by penalizing derivatives of the slowness field. A major decision to be made is the selection of the weights on the penalty terms, particularly the smoothing penalty weights. The authors use a continuation approach for selecting the smoothing penalty weights. Instead of fixing the smoothing penalty weights, they decrease the smoothing penalty weights in a step-by-step fashion, using the slowness model computed with the previous (larger) weights as the initial slowness model for the next step with the new (smaller) weights. A surprising outcome in synthetic problems is that the model error continues to decrease as they continue to decrease the smoothing penalty weights, even after the data error has leveled off at the noise level. This continuation approach can solve synthetic problems more accurately than fixed smoothing penalty weights, and appears to yield more features of interest in real-data applications of traveltime tomography.
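The continuation strategy can be sketched as follows (a toy illustration with an identity "tomography" operator and a first-difference smoothing penalty; the solver, step sizes, and weight schedule are illustrative assumptions, not the authors' algorithm):

```python
def solve_penalized(G, t, mu, x0, iters=2000):
    """Minimize ||G x - t||^2 + mu * sum_j (x_{j+1} - x_j)^2 by gradient
    descent, starting from x0 (the warm start is the point of the method)."""
    n = len(x0)
    # Step size chosen for stability of this toy operator (||G|| ~ 1,
    # path-graph Laplacian eigenvalues < 4).
    lr = 1.0 / (2.0 + 8.0 * mu)
    x = list(x0)
    for _ in range(iters):
        r = [sum(G[i][j] * x[j] for j in range(n)) - t[i]
             for i in range(len(t))]
        g = [2.0 * sum(G[i][j] * r[i] for i in range(len(t)))
             for j in range(n)]
        for j in range(n):                      # smoothing-penalty gradient
            if j > 0:
                g[j] += 2.0 * mu * (x[j] - x[j - 1])
            if j < n - 1:
                g[j] += 2.0 * mu * (x[j] - x[j + 1])
        x = [x[j] - lr * g[j] for j in range(n)]
    return x

def continuation(G, t, mu0=10.0, factor=0.5, steps=8):
    """Decrease the smoothing weight step by step, re-using the previous
    model as the initial guess for the next (smaller) weight."""
    x = [0.0] * len(G[0])
    mu = mu0
    for _ in range(steps):
        x = solve_penalized(G, t, mu, x)
        mu *= factor
    return x

# Toy example: identity operator, blocky slowness model, noise-free data.
n = 8
G = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
true_x = [1.0] * 4 + [2.0] * 4
t = list(true_x)
x = continuation(G, t)
```

Each stage starts from a heavily smoothed model and only gradually admits sharper structure, which is what lets the model error keep improving as the smoothing weight shrinks.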
Cryptococcal pleuritis developing in a patient on regular hemodialysis.
Kinjo, K; Satake, S; Ohama, T
2009-09-01
A 64-year-old male on regular hemodialysis who was a human T lymphotrophic virus Type I (HTLV-I) carrier developed cryptococcal pleuritis. The initial manifestations of the present case were a persistent cough and the accumulation of unilateral pleural effusion. A culture of the pleural fluid of the patient grew Cryptococcus neoformans and a test for antigens against Cryptococcus neoformans in the pleural fluid was also positive; therefore, cryptococcal pleuritis was diagnosed. Pleural cryptococcosis per se is rare and it is extremely rare for a dialysis patient to develop pleural cryptococcosis. To our knowledge, only a few cases of cryptococcal pleuritis have so far been reported in patients on dialysis. Furthermore, an isolated occurrence of cryptococcal pleuritis with no cryptococcal pulmonary parenchymal lesions, as was seen in the present case, is rare because cryptococcal pleuritis is usually associated with underlying cryptococcal pulmonary parenchymal lesions. Patients on chronic dialysis are susceptible to developing pleural effusion from many etiologies such as congestive heart failure, infection (tuberculosis, bacterial, viral, parasitic, fungal), collagen vascular disease, drug reaction, metastasis, or uremia itself. Cryptococcal pleuritis developing in a dialysis patient is extremely rare, but physicians should consider cryptococcal infection as a possible cause when pleural effusion develops in a dialysis patient and no other cause is identified, as occurred in the present case.
Local and global regularized concept factorization for image clustering
NASA Astrophysics Data System (ADS)
Qian, Bin; Tang, Zhenmin; Shen, Xiaobo; Shu, Zhenqiu
2017-01-01
Concept factorization (CF), as a popular matrix factorization technique, has recently attracted increasing attention in image clustering, due to its strong ability in dimensionality reduction and data representation. Existing CF variants only consider the local structure of data, but ignore the global structure information embedded in data, which is crucial for data representation. To address the above issue, we propose an improved CF method, namely local and global regularized concept factorization (LGCF), by considering the local and global structures simultaneously. Specifically, the local geometric structure is depicted in LGCF via a hypergraph, which is capable of precisely capturing high-order geometrical information. In addition, to discover the global structure, we establish an unsupervised discriminant criterion, which characterizes the between-class scatter and the total scatter of the data with the help of latent features in LGCF. For the formulated LGCF, a multiplicative update rule is developed, and the convergence is rigorously proved. Extensive experiments on several real image datasets demonstrate the superiority of the proposed method over the state-of-the-art methods in terms of clustering accuracy and mutual information.
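For reference, the plain CF multiplicative updates can be sketched as follows (this is Xu and Gong's original formulation X ≈ X W Vᵀ, without the hypergraph and discriminant regularizers that LGCF adds; the tiny dataset is illustrative):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def cf(X, k, iters=200, eps=1e-9):
    """Plain concept factorization X ~ X W V^T with nonnegative W, V.
    Multiplicative updates keep W and V nonnegative by construction and
    only need the Gram matrix K = X^T X."""
    random.seed(0)
    n = len(X[0])                       # number of samples (columns of X)
    K = matmul(transpose(X), X)
    W = [[random.random() for _ in range(k)] for _ in range(n)]
    V = [[random.random() for _ in range(k)] for _ in range(n)]
    for _ in range(iters):
        KV = matmul(K, V)
        KWVtV = matmul(matmul(K, W), matmul(transpose(V), V))
        W = [[W[i][j] * KV[i][j] / (KWVtV[i][j] + eps)
              for j in range(k)] for i in range(n)]
        KW = matmul(K, W)
        VWtKW = matmul(V, matmul(transpose(W), KW))
        V = [[V[i][j] * KW[i][j] / (VWtKW[i][j] + eps)
              for j in range(k)] for i in range(n)]
    return W, V

# Two obvious column clusters; cluster label = argmax over a sample's V row.
X = [[1.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 1.0]]
W, V = cf(X, 2)
labels = [max(range(2), key=lambda j: V[i][j]) for i in range(4)]
```

Because the updates involve X only through K = XᵀX, graph- or hypergraph-based penalties (as in LGCF) can be folded into the numerator and denominator of the V update without changing the overall scheme.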
Nonergodic Phases in Strongly Disordered Random Regular Graphs.
Altshuler, B L; Cuevas, E; Ioffe, L B; Kravtsov, V E
2016-10-07
We combine numerical diagonalization with semianalytical calculations to prove the existence of the intermediate nonergodic but delocalized phase in the Anderson model on disordered hierarchical lattices. We suggest a new generalized population dynamics that is able to detect the violation of ergodicity of the delocalized states within the Abou-Chakra, Anderson, and Thouless recursive scheme. This result is supplemented by statistics of random wave functions extracted from exact diagonalization of the Anderson model on an ensemble of disordered random regular graphs (RRG) of N sites with the connectivity K=2. By extrapolation of the results of both approaches to N→∞ we obtain the fractal dimensions D_{1}(W) and D_{2}(W) as well as the population dynamics exponent D(W) with the accuracy sufficient to claim that they are nontrivial in the broad interval of disorder strength W_{E}
Some Characteristics of Regular Fracture-lineament Global Network
NASA Astrophysics Data System (ADS)
Anokhin, Vladimir; Longinos, Biju
2013-04-01
The existence of a regular fracture-lineament global network (FLGN), or regmatic network, has long been known for the Earth's land areas in many regions. The authors made more than 20,000 measurements of the azimuths of lineaments and fractures on geographic, geological, and tectonic maps for a number of regions and for the Earth as a whole; the resulting data files were then subjected to factor analysis. We detect the existence of the FLGN on the ocean floor as well. A statistical relation between fracture and lineament directions was established, and the control of large-scale lineaments by fractures within the FLGN was substantiated. The predominant strike directions of the linear elements of the FLGN are 0-10°, 80-90°, 30-60°, and 120-150°. The FLGN exhibits fractality: at any given scale, linear elements of a common direction alternate with a constant step. The FLGN was formed under a continuous stress that exists at least throughout the entire thickness of the Earth's crust and has persisted for at least the entire Phanerozoic. This stress was generated by a complex of forces in the crust: rotational, pulsational, and possibly others. All of these forces are symmetric about the Earth's rotation axis, and some of them also about the equator. The Earth's rotational and pulsational processes are thus the main factors behind these forces and, hence, behind the formation of the fracture-lineament network. The FLGN determines the most favorable places for fracturing, the formation of fracture-controlled landforms, volcanic and seismic processes (geohazards), fluid flow, and ore formation (mineral deposits).
The effect of regular exercise on cognitive functioning and personality.
Young, R J
1979-09-01
The effect of regular exercise on cognitive functioning and personality was investigated in 32 subjects representing 4 discrete groups based on sex and age. Before and after a 10-week exercise programme of jogging, calisthenics, and recreational activities, a test battery was administered to assess functioning in a number of domains: intelligence (WAIS Digit Symbol and Block Design); brain function (Trail-Making); speed of performance (Crossing-Off); memory and learning (WMS Visual Reproduction and Associate Learning); morale and life satisfaction (Life Satisfaction and Control Ratings); anxiety (MAACL); and depression (MAACL). Improvement was observed on several physiological parameters. ANOVA revealed significant sex and age differences on Digit Symbol and Block Design and age differences on Trail-Making, Crossing-Off, Associate Learning, and anxiety. Regardless of sex and age, significant improvement in performance was observed from pre- to post-test on Digit Symbol, Block Design, Trail-Making, Crossing-Off, and on Associate Learning. In addition, an increase on health status rating (p less than .01) and decrease in anxiety were observed from pre- to post-test. These data illustrate beneficial effects of exercise on certain measures of cognitive functioning and personality.
Mixed singular-regular boundary conditions in multislab radiation transport
NASA Astrophysics Data System (ADS)
de Abreu, Marcos Pimenta
2004-06-01
This article reports a computational method for approximately solving radiation transport problems with anisotropic scattering defined on multislab domains irradiated from one side with a beam of monoenergetic neutral particles. We assume here that the incident beam may have a monodirectional component and a continuously distributed component in angle. We begin by defining the target problem representing the class of radiation transport problems that we are focused on. Following Chandrasekhar, we then decompose the target problem into an uncollided transport problem with left singular boundary conditions and a diffusive transport problem with regular boundary conditions. We perform an analysis of these problems to derive the exact solution of the uncollided transport problem and a discrete ordinates solution in open form to the diffusive transport problem. These solutions are the basis for the definition of a computational method for approximately solving the target problem. We illustrate the numerical accuracy of our method with three basic problems in radiative transfer and neutron transport, and we conclude this article with a discussion and directions for future work.
How color, regularity, and good Gestalt determine backward masking.
Sayim, Bilge; Manassi, Mauro; Herzog, Michael
2014-06-18
The strength of visual backward masking depends on the stimulus onset asynchrony (SOA) between target and mask. Recently, it was shown that the conjoint spatial layout of target and mask is as crucial as SOA. Particularly, masking strength depends on whether target and mask group with each other. The same is true in crowding, where the global spatial layout of the flankers and target-flanker grouping determine crowding strength. Here, we presented a vernier target followed by different flanker configurations at varying SOAs. Similar to crowding, masking of a red vernier target was strongly reduced for arrays of 10 green compared with 10 red flanking lines. Unlike crowding, single green lines flanking the red vernier showed strong masking. Irregularly arranged flanking lines yielded stronger masking than did regularly arranged lines, again similar to crowding. While cuboid flankers reduced crowding compared with single lines, this was not the case in masking. We propose that, first, masking is reduced when the flankers are part of a larger spatial structure. Second, spatial factors counteract color differences between the target and the flankers. Third, complex Gestalts, such as cuboids, seem to need longer processing times to show ungrouping effects as observed in crowding. Strong parallels between masking and crowding suggest similar underlying mechanisms; however, temporal factors in masking additionally modulate performance, acting as an additional grouping cue.
Physiological time-series analysis: what does regularity quantify?
NASA Technical Reports Server (NTRS)
Pincus, S. M.; Goldberger, A. L.
1994-01-01
Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity that appears to have potential application to a wide variety of physiological and clinical time-series data. The focus here is to provide a better understanding of ApEn to facilitate its proper utilization, application, and interpretation. After giving the formal mathematical description of ApEn, we provide a multistep description of the algorithm as applied to two contrasting clinical heart rate data sets. We discuss algorithm implementation and interpretation and introduce a general mathematical hypothesis of the dynamics of a wide class of diseases, indicating the utility of ApEn to test this hypothesis. We indicate the relationship of ApEn to variability measures, the Fourier spectrum, and algorithms motivated by study of chaotic dynamics. We discuss further mathematical properties of ApEn, including the choice of input parameters, statistical issues, and modeling considerations, and we conclude with a section on caveats to ensure correct ApEn utilization.
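The multistep ApEn algorithm described above can be sketched directly (a minimal implementation of Pincus's definition; the default tolerance r = 0.2 × SD is a common choice in the physiological literature, not a prescription from this text):

```python
import math

def apen(u, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a time series u:
    ApEn = Phi_m(r) - Phi_{m+1}(r), where Phi_m is the average log
    fraction of length-m templates matching within tolerance r (max-norm).
    Lower values indicate more regularity."""
    n = len(u)
    if r is None:
        mean = sum(u) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in u) / n)
    def phi(m):
        k = n - m + 1
        templates = [u[i:i + m] for i in range(k)]
        total = 0.0
        for a in templates:
            # Self-matches are counted, so every template has >= 1 match.
            matches = sum(1 for b in templates
                          if max(abs(x - y) for x, y in zip(a, b)) <= r)
            total += math.log(matches / k)
        return total / k
    return phi(m) - phi(m + 1)

# A strictly alternating series is perfectly regular, so its ApEn is
# near zero; an irregular (noisy) series yields a larger value.
regular = [1.0 if i % 2 else 0.0 for i in range(100)]
ap_regular = apen(regular)
```

The choice of m and r matters: r too small leaves few template matches and unstable estimates, while r too large blurs all structure, which is why the statistical issues mentioned above deserve attention before comparing ApEn values across data sets.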
Chemical composition of semi-regular variable giants. III.
NASA Astrophysics Data System (ADS)
Britavskiy, N. E.; Andrievsky, S. M.; Tsymbal, V. V.; Korotin, S. A.; Martin, P.; Andrievska, A. S.
2012-06-01
Aims: We derive the stellar atmosphere parameters and chemical element abundances of four stars classified as semi-regular variables of type "d" (SRd). These stars should presumably belong to the Galactic halo population. Methods: Elemental abundances are derived by applying both local thermodynamical equilibrium and non-local thermodynamical equilibrium analyses to high resolution (R ≈ 80 000) spectra obtained with the CFHT ESPaDOnS spectrograph. We determine the abundances of 27 chemical elements in VW Dra, FT Cnc, VV LMi, and MQ Hya. Results: The stars of our present program have a chemical composition that is inconsistent with their presumable status as metal-deficient halo giants. All studied SRd giants have relative-to-solar elemental abundances that are typical of the thick/thin Galactic disk stars. We find that all objects of this class for which spectroscopic follow-up analyses have been completed show a dichotomy in the amplitudes of their photometric variations. Specifically, the disk objects have small amplitudes, while halo SRd stars have much larger amplitudes, indicating that the amplitude is related to the metallicity of the star. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. Figures 2 and 3 are available in electronic form at http://www.aanda.org