Sample records for dimensionally regularized Polyakov

  1. Polyakov loop modeling for hot QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukushima, Kenji; Skokov, Vladimir

Here, we review theoretical aspects of quantum chromodynamics (QCD) at finite temperature. The most important physical variable to characterize hot QCD is the Polyakov loop, which is an approximate order parameter for quark deconfinement in a hot gluonic medium. In addition to its role as an order parameter, the Polyakov loop has rich physical content in both perturbative and non-perturbative sectors. This review covers a wide range of subjects associated with the Polyakov loop, from topological defects in hot QCD to model building with coupling to the Polyakov loop.
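
    For orientation, the Polyakov loop that recurs throughout these records is conventionally defined (a standard textbook form, not quoted from this abstract) as the traced, path-ordered thermal Wilson line:

```latex
L(\vec{x}) \;=\; \frac{1}{N_c}\,\mathrm{Tr}\,\mathcal{P}\exp\!\left(i g \int_0^{1/T} d\tau\, A_4(\tau,\vec{x})\right),
```

    whose thermal expectation value ⟨L⟩ vanishes in the confined phase of the pure gauge theory and is nonzero in the deconfined phase.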

  2. Polyakov loop modeling for hot QCD

    DOE PAGES

    Fukushima, Kenji; Skokov, Vladimir

    2017-06-19

Here, we review theoretical aspects of quantum chromodynamics (QCD) at finite temperature. The most important physical variable to characterize hot QCD is the Polyakov loop, which is an approximate order parameter for quark deconfinement in a hot gluonic medium. In addition to its role as an order parameter, the Polyakov loop has rich physical content in both perturbative and non-perturbative sectors. This review covers a wide range of subjects associated with the Polyakov loop, from topological defects in hot QCD to model building with coupling to the Polyakov loop.

  3. Phase structure of the Polyakov-quark-meson model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, B.-J.; Pawlowski, J. M.; Wambach, J.

    2007-10-01

The relation between the deconfinement and chiral phase transition is explored in the framework of a Polyakov-loop-extended two-flavor quark-meson (PQM) model. In this model the Polyakov loop dynamics is represented by a background temporal gauge field which also couples to the quarks. As a novelty, an explicit quark chemical potential and N_f-dependence in the Polyakov loop potential is proposed by using renormalization group arguments. The behavior of the Polyakov loop as well as the chiral condensate as a function of temperature and quark chemical potential is obtained by minimizing the grand canonical thermodynamic potential of the system. The effect of the Polyakov loop dynamics on the chiral phase diagram and on several thermodynamic bulk quantities is presented.
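
    As a schematic guide to the minimization described above, the PQM grand canonical potential has the generic mean-field structure (standard PQM notation, not quoted from the abstract; 𝒰 denotes the Polyakov loop potential and Φ, Φ̄ the loop variables):

```latex
\Omega(\sigma,\Phi,\bar\Phi;T,\mu) \;=\; U(\sigma) \;+\; \mathcal{U}(\Phi,\bar\Phi;T) \;+\; \Omega_{q\bar q}(\sigma,\Phi,\bar\Phi;T,\mu),
\qquad
\frac{\partial\Omega}{\partial\sigma}=\frac{\partial\Omega}{\partial\Phi}=\frac{\partial\Omega}{\partial\bar\Phi}=0 .
```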

  4. Polyakov loop correlator in perturbation theory

    DOE PAGES

    Berwein, Matthias; Brambilla, Nora; Petreczky, Péter; ...

    2017-07-25

We study the Polyakov loop correlator in the weak coupling expansion and show how the perturbative series re-exponentiates into singlet and adjoint contributions. We calculate the order g^7 correction to the Polyakov loop correlator in the short distance limit. We show how the singlet and adjoint free energies arising from the re-exponentiation formula of the Polyakov loop correlator are related to the gauge invariant singlet and octet free energies that can be defined in pNRQCD; namely, we find that the two definitions agree at leading order in the multipole expansion, but differ at first order in the quark-antiquark distance.
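
    The singlet-adjoint decomposition referred to here has the familiar form (standard normalization going back to McLerran and Svetitsky; a sketch, not a verbatim quote of the paper's equations):

```latex
\langle L(\vec 0)\, L^\dagger(\vec r)\rangle \;=\; \frac{1}{N_c^2}\, e^{-F_s(r,T)/T} \;+\; \frac{N_c^2-1}{N_c^2}\, e^{-F_a(r,T)/T},
```

    with F_s and F_a the singlet and adjoint free energies discussed in the abstract.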

  5. Polyakov loop correlator in perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berwein, Matthias; Brambilla, Nora; Petreczky, Péter

We study the Polyakov loop correlator in the weak coupling expansion and show how the perturbative series re-exponentiates into singlet and adjoint contributions. We calculate the order g^7 correction to the Polyakov loop correlator in the short distance limit. We show how the singlet and adjoint free energies arising from the re-exponentiation formula of the Polyakov loop correlator are related to the gauge invariant singlet and octet free energies that can be defined in pNRQCD; namely, we find that the two definitions agree at leading order in the multipole expansion, but differ at first order in the quark-antiquark distance.

  6. One-dimensional QCD in thimble regularization

    NASA Astrophysics Data System (ADS)

    Di Renzo, F.; Eruzzi, G.

    2018-01-01

QCD in 0+1 dimensions is numerically solved via thimble regularization. In the context of this toy model, a general formalism is presented for SU(N) theories. The sign problem that the theory displays is a genuine one, stemming from a (quark) chemical potential. Three stationary points are present in the original (real) domain of integration, so that contributions from all the thimbles associated with them are to be taken into account: we show how semiclassical computations can provide hints on the regions of parameter space where this is absolutely crucial. Known analytical results for the chiral condensate and the Polyakov loop are correctly reproduced: this is in particular trivial at high values of the number of flavors N_f. In this regime we notice that the single-thimble dominance scenario takes place (the dominant thimble is the one associated with the identity). At low values of N_f computations can be more difficult. It is important to stress that this is not at all a consequence of the original sign problem (not even via the residual phase). The latter is always under control, while accidental, delicate cancellations of contributions coming from different thimbles can be in place in (restricted) regions of the parameter space.
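
    For context, thimble regularization rewrites the original integral as a sum over the Lefschetz thimbles 𝒥_σ attached to the stationary points p_σ (a standard schematic form, not taken from the paper):

```latex
Z \;=\; \sum_{\sigma} n_\sigma \, e^{-i\,\mathrm{Im}\,S(p_\sigma)} \int_{\mathcal{J}_\sigma} dz \; e^{-\mathrm{Re}\,S(z)},
```

    where n_σ counts the intersections of the dual thimbles with the original integration domain; on each thimble Im S is constant, leaving only the residual phase of the integration measure.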

  7. Three flavor Nambu-Jona-Lasinio model with Polyakov loop and competition with nuclear matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciminale, M.; Ippolito, N. D.; Nardulli, G.

    2008-03-01

We study the phase diagram of the three flavor Polyakov-Nambu-Jona-Lasinio (PNJL) model and, in particular, the interplay between chiral symmetry restoration and the deconfinement crossover. We compute chiral condensates, quark densities, and the Polyakov loop at several values of temperature and chemical potential. Moreover, we investigate the role of Polyakov loop dynamics in the transition from nuclear matter to quark matter.

  8. Polyakov loop fluctuations in the presence of external fields

    NASA Astrophysics Data System (ADS)

    Lo, Pok Man; Szymański, Michał; Redlich, Krzysztof; Sasaki, Chihiro

    2018-06-01

We study the implications of the spontaneous and explicit Z(3) center symmetry breaking for the Polyakov loop susceptibilities. To this end, ratios of the susceptibilities of the real and imaginary parts, as well as of the modulus, of the Polyakov loop are computed within an effective model using a color group integration scheme. We show that the essential features of the lattice QCD results for these ratios can be successfully captured by the effective approach. Furthermore, we discuss a novel scaling relation in one of these ratios involving the explicit breaking field, volume, and temperature.

  9. Duality and the Knizhnik-Polyakov-Zamolodchikov relation in Liouville quantum gravity.

    PubMed

    Duplantier, Bertrand; Sheffield, Scott

    2009-04-17

We present a (mathematically rigorous) probabilistic and geometrical proof of the Knizhnik-Polyakov-Zamolodchikov relation between scaling exponents in a Euclidean planar domain D and in Liouville quantum gravity. It uses the properly regularized quantum area measure dμ_γ = ε^(γ²/2) e^(γ h_ε(z)) dz, where dz is the Lebesgue measure on D, γ is a real parameter, 0 ≤ γ < 2, and h_ε(z) denotes the mean value of an instance h of the Gaussian free field on the circle of radius ε centered at z. The singular case γ > 2 is shown to be related to the quantum measure dμ_γ', γ' < 2, by the fundamental duality γγ' = 4.

  10. Six-dimensional regularization of chiral gauge theories

    NASA Astrophysics Data System (ADS)

    Fukaya, Hidenori; Onogi, Tetsuya; Yamamoto, Shota; Yamamura, Ryo

    2017-03-01

    We propose a regularization of four-dimensional chiral gauge theories using six-dimensional Dirac fermions. In our formulation, we consider two different mass terms having domain-wall profiles in the fifth and the sixth directions, respectively. A Weyl fermion appears as a localized mode at the junction of two different domain walls. One domain wall naturally exhibits the Stora-Zumino chain of the anomaly descent equations, starting from the axial U(1) anomaly in six dimensions to the gauge anomaly in four dimensions. Another domain wall implies a similar inflow of the global anomalies. The anomaly-free condition is equivalent to requiring that the axial U(1) anomaly and the parity anomaly are canceled among the six-dimensional Dirac fermions. Since our formulation is based on a massive vector-like fermion determinant, a nonperturbative regularization will be possible on a lattice. Putting the gauge field at the four-dimensional junction and extending it to the bulk using the Yang-Mills gradient flow, as recently proposed by Grabowska and Kaplan, we define the four-dimensional path integral of the target chiral gauge theory.

The ξ/ξ_2nd ratio as a test for Effective Polyakov Loop Actions

    NASA Astrophysics Data System (ADS)

    Caselle, Michele; Nada, Alessandro

    2018-03-01

Effective Polyakov line actions are a powerful tool to study the finite temperature behaviour of lattice gauge theories. They are much simpler to simulate than the original (3+1)-dimensional LGTs and are affected by a milder sign problem. However, it is not clear to which extent they really capture the rich spectrum of the original theories, a feature which is of great importance if one aims to address the sign problem. We propose here a simple way to address this issue based on the so-called second moment correlation length ξ_2nd. The ratio ξ/ξ_2nd between the exponential correlation length and the second moment one is equal to 1 if only a single mass is present in the spectrum, and becomes larger and larger as the complexity of the spectrum increases. Since both ξ and ξ_2nd are easy to measure on the lattice, this is an economic and effective way to keep track of the spectrum of the theory. In this respect we show, using both numerical simulations and effective string calculations, that this ratio increases dramatically as the temperature decreases. This non-trivial behaviour should be reproduced by the Polyakov loop effective action.
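
    A minimal numerical sketch of the proposed diagnostic on a hypothetical one-dimensional correlator (toy data and simplified open-boundary definitions; lattice practice uses periodic variants and proper error analysis):

```python
import numpy as np

def xi_exp(G):
    # effective-mass estimate of the exponential correlation length from the
    # asymptotic decay G(r) ~ exp(-r / xi), using the tail of the correlator
    tail = slice(len(G) // 2, len(G) - 1)
    m_eff = np.log(G[tail] / G[tail.start + 1:tail.stop + 1]).mean()
    return 1.0 / m_eff

def xi_2nd(G):
    # second-moment correlation length: xi_2nd^2 = sum r^2 G(r) / (2 sum G(r))
    r = np.arange(len(G))
    return np.sqrt((r**2 * G).sum() / (2.0 * G.sum()))

# toy two-mass correlator: the ratio exceeds 1 once a second state contributes
r = np.arange(48)
G = np.exp(-r / 6.0) + 0.5 * np.exp(-r / 2.0)
print(xi_exp(G) / xi_2nd(G))
```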

Constituent Quarks and Gluons, Polyakov loop and the Hadron Resonance Gas Model

    NASA Astrophysics Data System (ADS)

    Megías, E.; Ruiz Arriola, E.; Salcedo, L. L.

    2014-03-01

Based on first-principles QCD arguments, it has been argued in [1] that the vacuum expectation value of the Polyakov loop can be represented in the hadron resonance gas model. We study this within the Polyakov-constituent quark model by implementing the quantum and local nature of the Polyakov loop [2, 3]. The existence of exotic states in the spectrum is discussed. Presented by E. Megías at the International Nuclear Physics Conference INPC 2013, 2-7 June 2013, Firenze, Italy. Supported by Plan Nacional de Altas Energías (FPA2011-25948), DGI (FIS2011-24149), Junta de Andalucía grant FQM-225, Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), Spanish MINECO's Centro de Excelencia Severo Ochoa Program grant SEV-2012-0234, and the Juan de la Cierva Program.

Hydrodynamics of the Polyakov line in SU(N_c) Yang-Mills

    DOE PAGES

    Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail

    2015-12-08

We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite N_c for Yang-Mills theory in even and odd space-time dimensions. The hydrostatic solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the de-confined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of N_c, and are consistent with the string model results at N_c = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(N_c) bubble using a piece-wise sound wave is suggested.

  14. Regularity and dimensional salience in temporal grouping.

    PubMed

    Prince, Jon B; Rice, Tim

    2018-04-30

How do pitch and duration accents combine to influence the perceived grouping of musical sequences? Sequence context influences the relative importance of these accents; for example, the presence of learned structure in pitch exaggerates the effect of pitch accents at the expense of duration accents despite being irrelevant to the task and not attributable to attention (Prince, 2014b). In the current study, two experiments examined whether the presence of temporal structure has the opposite effect. Experiment 1 tested baseline conditions, in which participants (N = 30) heard sequences with various sizes of either pitch or duration accents, which implied either duple or triple groupings (accent every two or three notes, respectively). Sequences either had regular temporal structure (isochronous) or not (irregular, via random inter-onset intervals). Regularity enhanced the effect of duration accents but had negligible influence on pitch accents. The accent sizes that gave the most equivalent ratings across dimension and regularity levels were used in Experiment 2 (N = 33), in which sequences contained both pitch and duration accents that suggested either duple, triple, or neutral groupings. Despite controlling for the baseline effect of regularity by selecting equally effective accent sizes, regularity had additional effects on duration accents, but only for duple groupings. Regularity did not influence the effectiveness of pitch accents when combined with duration accents. These findings offer some support for a dimensional salience hypothesis, which proposes that the presence of temporal structure should foster duration accent effectiveness at the expense of pitch accents.

  15. Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.

    PubMed

    Kim, Hwi; Hahn, Joonku; Lee, Byoungho

    2009-04-13

Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions for structural factors such as the viewing angle of the facet panels and the observation distance for 3D displays with omnidirectional full parallax are studied. As a main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.

  16. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
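
    A minimal sketch of the two-stage L1-regularized idea on synthetic data (illustrative only; the paper's estimator, concave penalties, and tuning procedure are more involved, and all shapes and values here are hypothetical):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, q = 200, 50, 80                     # samples, covariates, instruments
Z = rng.standard_normal((n, q))
# endogenous covariates generated from the instruments plus noise
X = Z[:, :p] @ np.diag(rng.uniform(0.5, 1.5, p)) + rng.standard_normal((n, p))
y = X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(n)

# Stage 1: regress each covariate on the instruments with an L1 penalty
X_hat = np.column_stack([
    Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p)
])

# Stage 2: regress the response on the fitted (exogenous) covariates
stage2 = Lasso(alpha=0.1).fit(X_hat, y)
print(stage2.coef_[:5])
```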

  17. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  18. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  19. Probing deconfinement in a chiral effective model with Polyakov loop at imaginary chemical potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Morita, Kenji; Skokov, Vladimir

    2011-10-01

The phase structure of the two-flavor Polyakov-loop extended Nambu-Jona-Lasinio model is explored at finite temperature and imaginary chemical potential with a particular emphasis on the confinement-deconfinement transition. We point out that the confined phase is characterized by a cos(3μ_I/T) dependence of the chiral condensate on the imaginary chemical potential, while in the deconfined phase this dependence is given by cos(μ_I/T) and accompanied by a cusp structure induced by the Z(3) transition. We demonstrate that the phase structure of the model strongly depends on the choice of the Polyakov loop potential U. Furthermore, we find that by changing the four-fermion coupling constant G_s, the location of the critical end point of the deconfinement transition can be moved into the real chemical potential region. We propose a new parameter characterizing the confinement-deconfinement transition.

  20. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
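
    A minimal proximal-gradient sketch of ℓ1-regularized score matching in the Gaussian case, where the loss is quadratic in the precision matrix K (a simplified reading of the loss; the paper's solver and theory are more refined):

```python
import numpy as np

# minimize tr(K S K)/2 - tr(K) + lam * ||offdiag(K)||_1 over precision K,
# with S the sample covariance; soft-threshold only the off-diagonal entries

def soft(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def score_matching_precision(S, lam=0.2, iters=500):
    p = S.shape[0]
    K = np.eye(p)
    step = 1.0 / np.linalg.eigvalsh(S).max()   # Lipschitz-safe step size
    for _ in range(iters):
        grad = 0.5 * (K @ S + S @ K) - np.eye(p)
        K = K - step * grad
        off = soft(K - np.diag(np.diag(K)), step * lam)
        K = off + np.diag(np.diag(K))          # keep diagonal unpenalized
    return K

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
S = np.cov(X, rowvar=False)
print(np.round(score_matching_precision(S), 2))
```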

  1. Renormalized Polyakov loop in the deconfined phase of SU(N) gauge theory and gauge-string duality.

    PubMed

    Andreev, Oleg

    2009-05-29

    We use gauge-string duality to analytically evaluate the renormalized Polyakov loop in pure Yang-Mills theories. For SU(3), the result is in quite good agreement with lattice simulations for a broad temperature range.

  2. The LPM effect in sequential bremsstrahlung: dimensional regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin

The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  3. The LPM effect in sequential bremsstrahlung: dimensional regularization

    DOE PAGES

    Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin

    2016-10-19

The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950

  5. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    PubMed

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation-time scheme, at a moderate computational overhead.
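
    The regularization step referred to above is commonly written as a projection of the non-equilibrium populations onto their second-order Hermite contribution (standard regularized-BGK form, e.g. as in Latt and Chopard; not quoted from the abstract):

```latex
f_i^{\mathrm{neq}} \;\approx\; \frac{w_i}{2 c_s^4}\, Q_{i\alpha\beta}\, \Pi^{\mathrm{neq}}_{\alpha\beta},
\qquad
Q_{i\alpha\beta} = c_{i\alpha} c_{i\beta} - c_s^2 \delta_{\alpha\beta},
\qquad
\Pi^{\mathrm{neq}}_{\alpha\beta} = \sum_j c_{j\alpha} c_{j\beta}\, \bigl(f_j - f_j^{\mathrm{eq}}\bigr),
```

    after which the usual BGK collision acts on the regularized populations.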

ξ/ξ_2nd ratio as a tool to refine effective Polyakov loop models

    NASA Astrophysics Data System (ADS)

    Caselle, Michele; Nada, Alessandro

    2017-10-01

Effective Polyakov line actions are a powerful tool to study the finite temperature behavior of lattice gauge theories. They are much simpler to simulate than the original lattice model and are affected by a milder sign problem, but it is not clear to which extent they really capture the rich spectrum of the original theories. We propose here a simple way to address this issue based on the so-called second moment correlation length ξ_2nd. The ratio ξ/ξ_2nd between the exponential correlation length and the second moment one is equal to 1 if only a single mass is present in the spectrum, and it becomes larger and larger as the complexity of the spectrum increases. Since both ξ and ξ_2nd are easy to measure on the lattice, this is a cheap and efficient way to keep track of the spectrum of the theory. As an example of the information one can obtain with this tool, we study the behavior of ξ/ξ_2nd in the confining phase of the (D = 3+1) SU(2) gauge theory and show that it is compatible with 1 near the deconfinement transition, but it increases dramatically as the temperature decreases. We also show that this increase can be well understood in the framework of an effective string description of the Polyakov loop correlator. This nontrivial behavior should be reproduced by the Polyakov loop effective action; thus, it represents a stringent and challenging test of existing proposals, and it may be used to fine-tune the couplings and to identify the range of validity of the approximations involved in their construction.

  7. Effective field theory dimensional regularization

    NASA Astrophysics Data System (ADS)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  8. Nucleon properties in the Polyakov quark-meson model

    NASA Astrophysics Data System (ADS)

    Li, Yingying; Hu, Jinniu; Mao, Hong

    2018-05-01

We study the nucleon as a nontopological soliton in a quark medium as well as in a nucleon medium in terms of the Polyakov quark-meson (PQM) model with two flavors at finite temperature and density. The constituent quark masses evolving with the temperature at various baryon chemical potentials are calculated and the equations of motion are solved according to the proper boundary conditions. The PQM model predicts an increasing size of the nucleon and a reduction of the nucleon mass in both hot media. However, the phase structure differs between the quark and nucleon media. There is a crossover in the low-density region and a first-order phase transition in the high-density region in the quark medium, whereas there exists a crossover characterized by the overlap of the nucleons in the nucleon medium.

  9. Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation

    NASA Astrophysics Data System (ADS)

    Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.

    2018-05-01

Typical Tsallis statistical mechanics quantifiers, like the partition function and the mean energy, exhibit poles. We are speaking of the partition function Z and the mean energy ⟨U⟩. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a numerable set of rational numbers of the q-line. These poles are dealt with using dimensional regularization resources. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitation potential.
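
    The mechanism is the same one familiar from loop integrals: in dimensional regularization the basic Euclidean integral evaluates to a Gamma function whose poles at non-positive integer arguments signal the divergences (a standard identity, given here only to illustrate how poles at special parameter values are isolated):

```latex
\int \frac{d^d k}{(2\pi)^d}\, \frac{1}{(k^2+m^2)^s} \;=\; \frac{\Gamma\!\left(s-\tfrac{d}{2}\right)}{(4\pi)^{d/2}\,\Gamma(s)}\; m^{\,d-2s};
```

    the q-dependent poles in Z and ⟨U⟩ are handled analogously, by continuing in the dimension and subtracting the pole part.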

  10. Lorentz-violating SO(3) model: Discussing unitarity, causality, and 't Hooft-Polyakov monopoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Scarpelli, A. P. Baeta; Helayël-Neto, J. A.

    2006-05-15

In this paper, we extend the analysis of Lorentz-violating quantum electrodynamics to the non-Abelian case: an SO(3) Yang-Mills Lagrangian with the addition of a non-Abelian Chern-Simons-type term. We consider the spontaneous symmetry breaking of the model and inspect its spectrum in order to check if unitarity and causality are respected. An analysis of the topological structure is also carried out and we show that a 't Hooft-Polyakov solution for monopoles is still present.

  11. Polyakov loop and the hadron resonance gas model.

    PubMed

    Megías, E; Arriola, E Ruiz; Salcedo, L L

    2012-10-12

The Polyakov loop has been used repeatedly as an order parameter in the deconfinement phase transition in QCD. We argue that, in the confined phase, its expectation value can be represented in terms of hadronic states, similarly to the hadron resonance gas model for the pressure. Specifically, L(T) ≈ (1/2) Σ_α g_α e^(−Δ_α/T), where g_α are the degeneracies and Δ_α are the masses of hadrons with exactly one heavy quark (the mass of the heavy quark itself being subtracted). We show that this approximate sum rule gives a fair description of available lattice data with N_f = 2+1 for temperatures in the range 150 MeV < T < 185 MeV.
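
    A toy evaluation of the quoted sum rule, with a short hypothetical list of heavy-quark hadron gaps (illustrative numbers only, not the physical spectrum used in the paper):

```python
import numpy as np

hadrons = [  # (degeneracy g_a, mass gap Delta_a in MeV) -- hypothetical values
    (3, 450.0),
    (6, 600.0),
    (9, 850.0),
]

def polyakov_loop_hrg(T_mev):
    # L(T) ~ (1/2) * sum_a g_a * exp(-Delta_a / T)
    return 0.5 * sum(g * np.exp(-d / T_mev) for g, d in hadrons)

for T in (150, 160, 170, 180):
    print(T, round(polyakov_loop_hrg(T), 4))
```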

  12. REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.

    PubMed

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

High-throughput genetic sequencing arrays with thousands of measurements per sample and a great amount of related censored clinical data have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties are employed and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.
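
    For the LASSO case discussed above, a generic penalized Cox fit can be sketched with the lifelines package on synthetic data (a plain L1-penalized fit; not the paper's folded-concave SCAD estimator or its coordinate-wise solver):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n, p = 300, 20
X = rng.standard_normal((n, p))
hazard = np.exp(0.8 * X[:, 0] - 0.8 * X[:, 1])   # two active covariates
T = rng.exponential(1.0 / hazard)                # event times
C = rng.exponential(2.0, n)                      # censoring times
df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)])
df["duration"] = np.minimum(T, C)
df["event"] = (T <= C).astype(int)

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure L1 penalty
cph.fit(df, duration_col="duration", event_col="event")
print(cph.params_.head())
```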

  13. Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization

    NASA Astrophysics Data System (ADS)

    Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna

    2014-12-01

We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields the resulting renormalization method is always applicable; we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.

  14. Optically programmable encoder based on light propagation in two-dimensional regular nanoplates.

    PubMed

    Li, Ya; Zhao, Fangyin; Guo, Shuai; Zhang, Yongyou; Niu, Chunhui; Zeng, Ruosheng; Zou, Bingsuo; Zhang, Wensheng; Ding, Kang; Bukhtiar, Arfan; Liu, Ruibin

    2017-04-07

We design an efficient optically controlled microdevice based on CdSe nanoplates. Two-dimensional CdSe nanoplates exhibit lighting patterns around the edges and can be realized as a new type of optically controlled programmable encoder. The light source is used to excite the nanoplates and control the logical position under vertical pumping mode by the objective lens. At each excitation point in the nanoplates, the preferred light-propagation routes are along the normal direction and perpendicular to the edges; the light then emits from the edges to form a localized lighting section. The intensity distribution around the edges of different nanoplates demonstrates that the small-scale lighting sections along the edge are much stronger (defined as '1') than the dark sections (defined as '0'). These '0' and '1' are the basic logic elements needed to compose logically functional devices. The observed propagation rules are consistent with theoretical simulations, meaning that the guided-light route in two-dimensional semiconductor nanoplates is regular and predictable. The same situation was also observed in regular CdS nanoplates. Basic theoretical analysis and experiments prove that the guided light and exit positions follow rules mainly originating from the shape rather than the material itself.

REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

High-throughput genetic sequencing arrays with thousands of measurements per sample and a great amount of related censored clinical data have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties are employed and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  16. Three-dimensional ionospheric tomography reconstruction using the model function approach in Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu

    2016-12-01

Ionospheric tomography is based on the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distributions. Due to incomplete measurements provided by the satellite-receiver geometry, it is a typical ill-posed problem, and how to overcome the ill-posedness remains a crucial research topic. In this paper, the Tikhonov regularization method is used and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance and a location-dependent spatial correlation, and the correlation model is developed by using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used to make independent validations. Both the test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.

  17. Quark-hadron phase structure of QCD matter from SU(4) Polyakov linear sigma model

    NASA Astrophysics Data System (ADS)

    Diab, Abdel Magied Abdel Aal; Tawfik, Abdel Nasser

    2018-04-01

The SU(4) Polyakov linear sigma model (PLSM) is extended towards characterizing the chiral condensates σ_l, σ_s and σ_c of light, strange and charm quarks, respectively, and the deconfinement order parameters φ and φ̄ at finite temperatures and densities (chemical potentials). The PLSM is used to study the QCD equation of state in the presence of the chiral condensate of charm for different finite chemical potentials. The PLSM results are in good agreement with recent lattice QCD simulations. We conclude that the charm condensate is likely not affected by the QCD phase transition, where the corresponding critical temperature is greater than that of the light and strange quark condensates.

  18. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
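
    A minimal sketch of spectral (nuclear-norm) regularization for matrix regression via proximal gradient with singular-value thresholding (synthetic data; the paper's scalable algorithm and degrees-of-freedom formula go beyond this):

```python
import numpy as np

# minimize (1/2n) sum_i (y_i - <B, X_i>)^2 + lam * ||B||_* over the matrix B

def svt(B, t):
    # proximal operator of t * ||.||_*: soft-threshold the singular values
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def matrix_lasso(Xs, y, shape, lam=0.1, step=0.2, iters=500):
    B = np.zeros(shape)
    n = len(y)
    for _ in range(iters):
        resid = np.array([np.sum(B * X) for X in Xs]) - y
        grad = sum(r * X for r, X in zip(resid, Xs)) / n
        B = svt(B - step * grad, step * lam)
    return B

rng = np.random.default_rng(3)
true_B = np.outer(rng.standard_normal(8), rng.standard_normal(8))  # rank one
Xs = [rng.standard_normal((8, 8)) for _ in range(200)]
y = np.array([np.sum(true_B * X) for X in Xs]) + 0.1 * rng.standard_normal(200)
B_hat = matrix_lasso(Xs, y, (8, 8))
print(np.round(np.linalg.svd(B_hat, compute_uv=False)[:4], 2))  # one dominant value
```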

  19. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.

  20. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  1. Remarks on the regularity criteria of three-dimensional magnetohydrodynamics system in terms of two velocity field components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Kazuo

    2014-03-15

We study the three-dimensional magnetohydrodynamics system and obtain its regularity criteria in terms of only two velocity vector field components, eliminating the condition on the third component completely. The proof consists of a new decomposition of the four nonlinear terms of the system and estimating a component of the magnetic vector field in terms of the same component of the velocity vector field. This result may be seen as a component reduction result of many previous works [C. He and Z. Xin, “On the regularity of weak solutions to the magnetohydrodynamic equations,” J. Differ. Equ. 213(2), 234–254 (2005); Y. Zhou, “Remarks on regularities for the 3D MHD equations,” Discrete Contin. Dyn. Syst. 12(5), 881–886 (2005)].

  2. Random packing of regular polygons and star polygons on a flat two-dimensional surface.

    PubMed

    Cieśla, Michał; Barbasz, Jakub

    2014-08-01

Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using the random sequential adsorption algorithm. The obtained results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packing reproduce already known results for disks.
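
    The random sequential adsorption algorithm itself is compact; the sketch below uses disks instead of unoriented polygons or star polygons (illustrative of the packing protocol only, not of the paper's shapes or saturation values):

```python
import numpy as np

rng = np.random.default_rng(4)
L_box, radius, attempts = 30.0, 1.0, 20_000
centers = np.empty((0, 2))

for _ in range(attempts):
    c = rng.uniform(radius, L_box - radius, size=2)
    # accept the trial placement only if it overlaps no deposited disk
    if centers.size == 0 or np.min(np.linalg.norm(centers - c, axis=1)) >= 2 * radius:
        centers = np.vstack([centers, c])

packing_ratio = len(centers) * np.pi * radius**2 / L_box**2
print(len(centers), round(packing_ratio, 3))
```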

  3. Regular three-dimensional presentations improve in the identification of surgical liver anatomy - a randomized study.

    PubMed

    Müller-Stich, Beat P; Löb, Nicole; Wald, Diana; Bruckner, Thomas; Meinzer, Hans-Peter; Kadmon, Martina; Büchler, Markus W; Fischer, Lars

    2013-09-25

Three-dimensional (3D) presentations enhance the understanding of complex anatomical structures. However, it has been shown that two-dimensional (2D) "key views" of anatomical structures may suffice in order to improve spatial understanding. The impact of real 3D images (3Dr) visible only with 3D glasses has not been examined yet. Contrary to 3Dr, regular 3D images apply techniques such as shadows and different grades of transparency to create the impression of 3D. This randomized study aimed to define the impact of both the addition of key views to CT images (2D+) and the use of 3Dr on the identification of liver anatomy in comparison with regular 3D presentations (3D). A computer-based teaching module (TM) was used. Medical students were randomized to three groups (2D+ or 3Dr or 3D) and asked to answer 11 anatomical questions and 4 evaluative questions. Both 3D groups had animated models of the human liver available to them which could be moved in all directions. 156 medical students (57.7% female) participated in this randomized trial. Students exposed to 3Dr and 3D performed significantly better than those exposed to 2D+ (p < 0.01, ANOVA). There were no significant differences between 3D and 3Dr and no significant gender differences (p > 0.1, t-test). Students randomized to 3D and 3Dr not only had significantly better results, but they also were significantly faster in answering the 11 anatomical questions when compared to students randomized to 2D+ (p < 0.03, ANOVA). Whether or not "key views" were used had no significant impact on the number of correct answers (p > 0.3, t-test). This randomized trial confirms that regular 3D visualization improves the identification of liver anatomy.

  4. Dimensional regularization of the IR divergences in the Fokker action of point-particle binaries at the fourth post-Newtonian order

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain

    2017-11-01

The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergences of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both these ambiguities from first principles, by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, for treating at once the IR and ultraviolet (UV) divergences. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.

  5. Critical end point in the presence of a chiral chemical potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Z. -F.; Cloët, I. C.; Lu, Y.

A class of Polyakov-loop-modified Nambu-Jona-Lasinio models has been used to support a conjecture that numerical simulations of lattice-regularized QCD defined with a chiral chemical potential can provide information about the existence and location of a critical end point in the QCD phase diagram drawn in the plane spanned by baryon chemical potential and temperature. That conjecture is challenged by conflicts between the model results and analyses of the same problem using simulations of lattice-regularized QCD (lQCD) and well-constrained Dyson-Schwinger equation (DSE) studies. We find the conflict is resolved in favor of the lQCD and DSE predictions when both a physically motivated regularization is employed to suppress the contribution of high-momentum quark modes in the definition of the effective potential connected with the Polyakov-loop-modified Nambu-Jona-Lasinio models and the four-fermion coupling in those models does not react strongly to changes in the mean field that is assumed to mock-up Polyakov-loop dynamics. With the lQCD and DSE predictions thus confirmed, it seems unlikely that simulations of lQCD with μ_5 > 0 can shed any light on a critical end point in the regular QCD phase diagram.

  6. A closed expression for the UV-divergent parts of one-loop tensor integrals in dimensional regularization

    NASA Astrophysics Data System (ADS)

    Sulyok, G.

    2017-07-01

Starting from the general definition of a one-loop tensor N-point function, we use its Feynman parametrization to calculate the ultraviolet (UV-)divergent part of an arbitrary tensor coefficient in the framework of dimensional regularization. In contrast to existing recursion schemes, we are able to present a general analytic result in closed form that enables direct determination of the UV-divergent part of any one-loop tensor N-point coefficient, independently of the UV-divergent parts of other one-loop tensor N-point coefficients. Simplified formulas and explicit expressions are presented for A-, B-, C-, D-, E-, and F-functions.
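
    For orientation, the lowest tensor coefficients have the familiar UV-divergent parts in d = 4 − 2ε (standard Passarino-Veltman conventions, quoted as a generic illustration rather than from the paper):

```latex
A_0(m)\big|_{\mathrm{UV}} \;=\; m^2\,\Delta_\varepsilon, \qquad
B_0(p^2,m_0^2,m_1^2)\big|_{\mathrm{UV}} \;=\; \Delta_\varepsilon, \qquad
\Delta_\varepsilon \equiv \frac{1}{\varepsilon} - \gamma_E + \ln 4\pi ,
```

    with the closed expression of the paper generalizing this to arbitrary N-point tensor coefficients.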

  7. Optical characteristics of a one-dimensional photonic crystal with an additional regular layer

    NASA Astrophysics Data System (ADS)

    Tolmachev, V. A.; Baldycheva, A. V.; Krutkova, E. Yu.; Perova, T. S.; Berwick, K.

    2009-06-01

In this paper, the photonic band gaps (PBGs) of a one-dimensional photonic crystal (1D PC) with an additional regular layer t, at a constant value of the lattice constant A and at normal incidence of the light beam, are investigated. The additional regular layer was formed on both sides of the high-refractive-index layer H. The gap map approach and the Transfer Matrix Method were used for numerical analysis of this structure. The limitation of the filling fraction values caused by the presence of the t-layer was taken into account during calculations of the stop-band (SB) regions for the three-component PC. A red shift of the SBs was observed upon introduction of the t-layer into a conventional two-component 1D PC with an optical contrast of N = 3.42/1. The blue edge of the first PBG occupies an intermediate position between the blue edges of the SB regions of conventional PCs with different optical contrasts N. This provides the opportunity to tune the optical contrast of the PC by introducing the additional layer rather than using a filler, as well as to fine-tune the SB edge. The influence of the number of periods m and the optical contrast N on the properties of the SBs was also investigated. The disappearance of high-order PBGs in the gap map was revealed at certain parameters of the additional layer.

  8. Assembly of the most topologically regular two-dimensional micro and nanocrystals with spherical, conical, and tubular shapes

    NASA Astrophysics Data System (ADS)

    Roshal, D. S.; Konevtsova, O. V.; Myasnikova, A. E.; Rochal, S. B.

    2016-11-01

We consider how to control the extension of curvature-induced defects in the hexagonal order covering different curved surfaces. In this framework we propose a physical mechanism for improving the structures of two-dimensional spherical colloidal crystals (SCCs). For any SCC comprising about 300 or fewer particles, the mechanism transforms all extended topological defects (ETDs) in the hexagonal order into point disclinations. Perfecting the structure is carried out by successive cycles of particle implantation and subsequent relaxation of the crystal. The mechanism is potentially suitable for obtaining colloidosomes with better selective permeability. Our approach enables modeling of the most topologically regular tubular and conical two-dimensional nanocrystals, including various possible polymorphic forms of the HIV viral capsid. Different HIV-like shells with an arbitrary number of structural units (SUs) and desired geometrical parameters are easily formed. Faceting of the obtained structures is performed by minimizing the suggested elastic energy.

  9. EIT image reconstruction with four dimensional regularization.

    PubMed

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated, especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, yield improved resolution and noise performance, in comparison to simpler image models.
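
    A minimal numpy sketch of the augmented formulation: stack a few measurement frames and couple neighboring image frames through a first-difference temporal penalty (all operators and shapes here are hypothetical toy stand-ins, not the paper's EIT forward model or priors):

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(5)
n_meas, n_pix, n_frames = 40, 64, 3
A = rng.standard_normal((n_meas, n_pix))          # toy sensitivity matrix

x_true = np.vstack([np.sin(np.linspace(0, 3, n_pix) + 0.3 * t)
                    for t in range(n_frames)]).ravel()
A_aug = block_diag(*([A] * n_frames))             # augmented forward operator
b = A_aug @ x_true + 0.01 * rng.standard_normal(n_meas * n_frames)

# spatial penalty per frame plus a first-difference penalty across frames
R_space = np.eye(n_pix * n_frames)
R_time = np.kron(np.diff(np.eye(n_frames), axis=0), np.eye(n_pix))
lam_s, lam_t = 1e-2, 1e-1

lhs = A_aug.T @ A_aug + lam_s * R_space.T @ R_space + lam_t * R_time.T @ R_time
x_hat = np.linalg.solve(lhs, A_aug.T @ b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```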

  10. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
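
    A brute-force sketch of the idea: clip the sample eigenvalues into an interval [u, κu] and choose u by maximizing the Gaussian likelihood (the paper derives the exact maximum likelihood solution in closed form; the grid search below only illustrates the estimator family):

```python
import numpy as np

def cond_reg_cov(X, kappa=10.0, n_grid=200):
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)
    best_w, best_ll = None, -np.inf
    for u in np.linspace(w.max() * 1e-4, w.max(), n_grid):
        w_clip = np.clip(w, u, kappa * u)
        # Gaussian log-likelihood up to constants: -log det(S_hat) - tr(S S_hat^-1)
        ll = -np.sum(np.log(w_clip)) - np.sum(w / w_clip)
        if ll > best_ll:
            best_w, best_ll = w_clip, ll
    return (V * best_w) @ V.T

X = np.random.default_rng(6).standard_normal((30, 50))  # "large p, small n"
vals = np.linalg.eigvalsh(cond_reg_cov(X))
print(round(vals.max() / vals.min(), 1))  # condition number capped at kappa
```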

  11. Meson properties at finite temperature in a three flavor nonlocal chiral quark model with Polyakov loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Contrera, G. A.; CONICET, Rivadavia 1917, 1033 Buenos Aires; Dumm, D. Gomez

    2010-03-01

    We study the finite temperature behavior of light scalar and pseudoscalar meson properties in the context of a three-flavor nonlocal chiral quark model. The model includes mixing with active strangeness degrees of freedom, and takes care of the effect of gauge interactions by coupling the quarks with the Polyakov loop. We analyze the chiral restoration and deconfinement transitions, as well as the temperature dependence of meson masses, mixing angles and decay constants. The critical temperature is found to be T_c ≈ 202 MeV, in better agreement with lattice results than the value recently obtained in the local SU(3) PNJL model. It is seen that above T_c the pseudoscalar meson masses increase, becoming degenerate with the masses of their chiral partners. The temperatures at which this matching occurs depend on the strange quark composition of the corresponding mesons. The topological susceptibility shows a sharp decrease after the chiral transition, signalling the vanishing of the U(1)_A anomaly at large temperatures.

  12. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  13. Shape and Symmetry Determine Two-Dimensional Melting Transitions of Hard Regular Polygons

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Antonaglia, James; Millan, Jaime A.; Engel, Michael; Glotzer, Sharon C.

    2017-04-01

    The melting transition of two-dimensional systems is a fundamental problem in condensed matter and statistical physics that has advanced significantly through the application of computational resources and algorithms. Two-dimensional systems present the opportunity for novel phases and phase transition scenarios not observed in 3D systems, but these phases depend sensitively on the system and, thus, predicting how any given 2D system will behave remains a challenge. Here, we report a comprehensive simulation study of the phase behavior near the melting transition of all hard regular polygons with 3 ≤ n ≤ 14 vertices using massively parallel Monte Carlo simulations of up to 1 × 10^6 particles. By investigating this family of shapes, we show that the melting transition depends upon both particle shape and symmetry considerations, which together can predict which of three different melting scenarios will occur for a given n. We show that systems of polygons with as few as seven edges behave like hard disks; they melt continuously from a solid to a hexatic fluid and then undergo a first-order transition from the hexatic phase to the isotropic fluid phase. We show that this behavior, which holds for all 7 ≤ n ≤ 14, arises from weak entropic forces among the particles. Strong directional entropic forces align polygons with fewer than seven edges and impose local order in the fluid. These forces can enhance or suppress the discontinuous character of the transition depending on whether the local order in the fluid is compatible with the local order in the solid. As a result, systems of triangles, squares, and hexagons exhibit a Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) predicted continuous transition between isotropic fluid and triatic, tetratic, and hexatic phases, respectively, and a continuous transition from the appropriate x-atic to the solid. In particular, we find that systems of hexagons display continuous two-step KTHNY melting. In contrast, due to

  14. Automatic Constraint Detection for 2D Layout Regularization.

    PubMed

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
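
    A toy version of the detect-then-enforce pipeline can be written in a few lines: detect near-coincident left edges and pull them together with a penalized least-squares solve, which stands in for the paper's full quadratic program. All tolerances and weights below are illustrative.

        # Illustrative sketch of constraint detection plus enforcement for
        # layout regularization; a stand-in for the paper's quadratic program.
        import numpy as np

        def regularize_lefts(lefts, tol=3.0, w=100.0):
            lefts = np.asarray(lefts, dtype=float)
            n = len(lefts)
            pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
                     if abs(lefts[i] - lefts[j]) < tol]    # detected constraints
            # minimize ||x - lefts||^2 + w * sum (x_i - x_j)^2 over detected pairs,
            # i.e. solve (I + w * L) x = lefts with L the pair-graph Laplacian.
            A = np.eye(n)
            for i, j in pairs:
                A[i, i] += w; A[j, j] += w
                A[i, j] -= w; A[j, i] -= w
            return np.linalg.solve(A, lefts), pairs

        x, pairs = regularize_lefts([10.0, 11.2, 50.0, 49.1, 120.0])
        print(pairs)   # [(0, 1), (2, 3)]
        print(x)       # elements 0/1 and 2/3 pulled onto (nearly) common lines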

  15. Regular transport dynamics produce chaotic travel times.

    PubMed

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  16. Regular transport dynamics produce chaotic travel times

    NASA Astrophysics Data System (ADS)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F.; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  17. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  18. A Three-Dimensional Finite Element Analysis of the Stress Distribution Generated by Splinted and Nonsplinted Prostheses in the Rehabilitation of Various Bony Ridges with Regular or Short Morse Taper Implants.

    PubMed

    Toniollo, Marcelo Bighetti; Macedo, Ana Paula; Rodrigues, Renata Cristina; Ribeiro, Ricardo Faria; de Mattos, Maria G

    The aim of this study was to compare the biomechanical performance of splinted or nonsplinted prostheses over short- or regular-length Morse taper implants (5 mm and 11 mm, respectively) in the posterior area of the mandible using finite element analysis. Three-dimensional geometric models of regular implants (Ø 4 × 11 mm) and short implants (Ø 4 × 5 mm) were placed into a simulated model of the left posterior mandible that included the first premolar tooth; all teeth posterior to this tooth had been removed. The four experimental groups were as follows: regular group SP (three regular implants rehabilitated with splinted prostheses), regular group NSP (three regular implants rehabilitated with nonsplinted prostheses), short group SP (three short implants rehabilitated with splinted prostheses), and short group NSP (three short implants rehabilitated with nonsplinted prostheses). Oblique forces were simulated on molars (365 N) and premolars (200 N). Qualitative and quantitative analyses of the minimum principal stress in bone were performed using ANSYS Workbench software, version 10.0. Splinting in the short group reduced the stress in the bone surrounding the implants and the tooth, whereas NSP and SP produced similar stresses in the regular group. Thus, splinted prostheses are the best indication for short implants; nonsplinted prostheses are feasible only when regular implants are present.

  19. Constrained Low-Rank Learning Using Least Squares-Based Regularization.

    PubMed

    Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong

    2017-12-01

    Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing a low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of the original data can be paired with some informative structure by imposing an appropriate constraint, e.g., a Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
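
    The nuclear-norm subproblem inside such inexact augmented Lagrange multiplier schemes is typically handled by singular value thresholding; a minimal sketch of that proximal step (standard definitions, not the authors' code) follows.

        # Singular value thresholding (SVT): the proximal operator of the
        # nuclear norm, the workhorse step of inexact ALM low-rank solvers.
        import numpy as np

        def svt(M, tau):
            """Proximal operator of tau * nuclear norm: shrink singular values."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s_shrunk = np.maximum(s - tau, 0.0)
            return (U * s_shrunk) @ Vt

        rng = np.random.default_rng(2)
        L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3
        M = L + 0.05 * rng.standard_normal((50, 40))                     # noisy
        L_hat = svt(M, tau=1.0)
        print(np.linalg.matrix_rank(L_hat))  # low rank recovered (typically 3)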

  20. Three-dimensional beam pattern of regular sperm whale clicks confirms bent-horn hypothesis

    NASA Astrophysics Data System (ADS)

    Zimmer, Walter M. X.; Tyack, Peter L.; Johnson, Mark P.; Madsen, Peter T.

    2005-03-01

    The three-dimensional beam pattern of a sperm whale (Physeter macrocephalus) tagged in the Ligurian Sea was derived using data on regular clicks from the tag and from hydrophones towed behind a ship circling the tagged whale. The tag defined the orientation of the whale, while sightings and beamformer data were used to locate the whale with respect to the ship. The existence of a narrow, forward-directed P1 beam with source levels exceeding 210 dBpeak re: 1 μPa at 1 m is confirmed. A modeled forward-beam pattern that matches clicks >20° off-axis predicts a directivity index of 26.7 dB and source levels of up to 229 dBpeak re: 1 μPa at 1 m. A broader backward-directed beam is produced by the P0 pulse with source levels near 200 dBpeak re: 1 μPa at 1 m and a directivity index of 7.4 dB. A low-frequency component with source levels near 190 dBpeak re: 1 μPa at 1 m is generated at the onset of the P0 pulse by air resonance. The results support the bent-horn model of sound production in sperm whales. While the sperm whale nose appears primarily adapted to produce an intense forward-directed sonar signal, less-directional click components convey information to conspecifics, and give rise to echoes from the seafloor and the surface, which may be useful for orientation during dives.

  1. Choice of regularization in adjoint tomography based on two-dimensional synthetic tests

    NASA Astrophysics Data System (ADS)

    Valentová, Lubica; Gallovič, František; Růžek, Bohuslav; de la Puente, Josep; Moczo, Peter

    2015-08-01

    We present synthetic tests of 2-D adjoint tomography of surface wave traveltimes obtained by ambient noise cross-correlation analysis across the Czech Republic. The data coverage may be considered perfect for tomography due to the density of the station distribution. Nevertheless, artefacts in the inferred velocity models arising from the data noise may still be observed when weak regularization (Gaussian smoothing of the misfit gradient) or too many iterations are considered. To examine the effect of the regularization and the number of iterations on the performance of the tomography in more detail, we performed extensive synthetic tests. Instead of the typically used (although criticized) checkerboard test, we propose to carry out the tests with two different target models: a simple smooth model and a complex realistic one. The first test reveals the sensitivity of the result to the data noise, while the second helps to analyse the resolving power of the data set. For various noise and Gaussian smoothing levels, we analysed the convergence towards (or divergence from) the target model with increasing number of iterations. Based on the tests we identified the optimal regularization, which we then employed in the inversion of 16 and 20 s Love-wave group traveltimes.
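
    The regularization studied here, Gaussian smoothing of the misfit gradient, is simple to illustrate; the sketch below (with invented array sizes and smoothing widths) just shows how increasing the smoothing level damps noise-driven extremes of a gradient field.

        # Minimal illustration of gradient smoothing as regularization in
        # adjoint tomography; shapes and widths are made up for the example.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(3)
        raw_gradient = rng.standard_normal((80, 120))      # noisy 2-D misfit gradient

        for sigma in (1.0, 3.0, 9.0):                      # weak ... strong smoothing
            g = gaussian_filter(raw_gradient, sigma=sigma)
            print(sigma, float(np.abs(g).max()))           # stronger smoothing damps
                                                           # noise-driven extremes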

  2. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, J. M.; Liu, Y.; Li, W.

    2013-09-15

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is then obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize a regularization parameter on the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.
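
    Minimum Fisher regularization is commonly implemented as an iteratively reweighted Tikhonov inversion, with weights proportional to the inverse of the current emissivity estimate. The sketch below follows that generic formulation; the geometry matrix, difference operator and parameter values are illustrative assumptions, not those of the HL-2A system.

        # Sketch of minimum Fisher information regularization as iteratively
        # reweighted Tikhonov; details here are illustrative assumptions.
        import numpy as np

        def min_fisher(G, b, alpha=1e-2, n_iter=5):
            m, n = G.shape
            D = np.diff(np.eye(n), axis=0)              # 1-D first-difference operator
            f = np.full(n, b.mean() / max(G.sum(axis=1).mean(), 1e-12))
            for _ in range(n_iter):
                w = 1.0 / np.clip(f, 1e-6, None)        # Fisher weight ~ 1/f
                A = G.T @ G + alpha * D.T @ (w[:-1, None] * D)
                f = np.linalg.solve(A, G.T @ b)
                f = np.clip(f, 0.0, None)               # emissivity is non-negative
            return f

        rng = np.random.default_rng(4)
        G = rng.random((30, 20))                        # geometry (chord) matrix
        f_true = np.exp(-((np.arange(20) - 10.0) / 4.0) ** 2)
        b = G @ f_true + 0.01 * rng.standard_normal(30)
        print(np.round(min_fisher(G, b), 2))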

  3. A regularity condition and temporal asymptotics for chemotaxis-fluid equations

    NASA Astrophysics Data System (ADS)

    Chae, Myeongju; Kang, Kyungkeun; Lee, Jihoon; Lee, Ki-Ahm

    2018-02-01

    We consider two-dimensional chemotaxis equations coupled to the Navier-Stokes equations. We present a new regularity criterion that is localized in a neighborhood of each point. Secondly, we establish temporal decay of the regular solutions under the assumption that the initial mass of the biological cell density is sufficiently small. Both results improve previously known results given in Chae et al (2013 Discrete Continuous Dyn. Syst. A 33 2271-97) and Chae et al (2014 Commun. PDE 39 1205-35).

  4. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    PubMed Central

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells that balance electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules of high-performance nanostructured solar cells, but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry. PMID:24603964

  5. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-07

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells that balance electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules of high-performance nanostructured solar cells, but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry.

  6. Solving the hypersingular boundary integral equation in three-dimensional acoustics using a regularization relationship.

    PubMed

    Yan, Zai You; Hung, Kin Chew; Zheng, Hui

    2003-05-01

    Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of a discretized operator matrix, evaluation of the double surface integrals is reduced to calculating the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.

  7. Three dimensional finite temperature SU(3) gauge theory near the phase transition

    NASA Astrophysics Data System (ADS)

    Bialas, P.; Daniel, L.; Morel, A.; Petersson, B.

    2013-06-01

    We have measured the correlation function of Polyakov loops on the lattice in three-dimensional SU(3) gauge theory near its finite temperature phase transition. Using a new and powerful application of finite size scaling, we furthermore extend the measurements of the critical couplings to considerably larger lattice sizes, both in the temperature and space directions, than were investigated earlier in this theory. With the help of these measurements we perform a detailed finite size scaling analysis, showing that for the critical exponents of the two-dimensional three-state Potts model the mass and the susceptibility fall on unique scaling curves. This strongly supports the expectation that the gauge theory is in the same universality class. The Nambu-Goto string model, on the other hand, predicts that the exponent ν takes its mean-field value, which is quite different from the value in the above-mentioned Potts model. Using our values of the critical couplings we also determine the continuum limit of the critical temperature in terms of the square root of the zero temperature string tension. This value is very close to the prediction of the Nambu-Goto string model in spite of the different critical behaviour.

  8. Regularity of Solutions of the Nonlinear Sigma Model with Gravitino

    NASA Astrophysics Data System (ADS)

    Jost, Jürgen; Keßler, Enno; Tolksdorf, Jürgen; Wu, Ruijun; Zhu, Miaomiao

    2018-02-01

    We propose a geometric setup to study analytic aspects of a variant of the supersymmetric two-dimensional nonlinear sigma model. This functional extends the functional of Dirac-harmonic maps by gravitino fields. The system of Euler-Lagrange equations of the two-dimensional nonlinear sigma model with gravitino is calculated explicitly. The gravitino terms pose additional analytic difficulties in showing smoothness of weak solutions; these are overcome using Rivière's regularity theory and Riesz potential theory.

  9. Enumeration of Extended m-Regular Linear Stacks.

    PubMed

    Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2016-12-01

    The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of [Formula: see text]-regular linear stacks, where each arc has length at least [Formula: see text] and the degree of each vertex is bounded by 2. Since the two terminal points in a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended [Formula: see text]-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is close to real protein contact maps. Denote the generating functions of the [Formula: see text]-regular linear stacks and the extended [Formula: see text]-regular linear stacks by [Formula: see text] and [Formula: see text], respectively. We show that [Formula: see text] can be written as a rational function of [Formula: see text]. For a certain [Formula: see text], by eliminating [Formula: see text], we obtain an equation satisfied by [Formula: see text] and derive the asymptotic formula for the numbers of [Formula: see text]-regular linear stacks of length [Formula: see text].

  10. Mathematical Modeling the Geometric Regularity in Proteus Mirabilis Colonies

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Jiang, Yi; Minsu Kim Collaboration

    A Proteus mirabilis colony exhibits striking spatiotemporal regularity: concentric ring patterns of alternating high and low bacterial density in space, and temporal periodicity in the repeated growth-and-swarm cycle. We present a simple mathematical model to explain the spatiotemporal regularity of P. mirabilis colonies. We study a one-dimensional system. Using a reaction-diffusion model with thresholds in cell density and nutrient concentration, we recreate periodic growth and spread patterns, suggesting that the nutrient constraint and cell density regulation might be sufficient to explain the spatiotemporal periodicity in P. mirabilis colonies. We further verify this result using a cell-based model.
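
    A toy one-dimensional version of such a threshold model is easy to write down. The sketch below (all coefficients and thresholds invented; the authors' actual model may differ) switches diffusion on only where both the cell density and the nutrient concentration exceed their thresholds.

        # Toy 1-D reaction-diffusion model with density and nutrient thresholds,
        # in the spirit of the abstract; not the authors' exact model.
        import numpy as np

        nx, dx, dt = 400, 1.0, 0.1
        n = np.zeros(nx); n[:5] = 1.0          # bacteria inoculated at the left edge
        c = np.ones(nx)                        # nutrient, initially uniform

        def lap(u):                            # 1-D Laplacian, periodic wrap (fine here)
            return (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2

        for step in range(20000):
            swarming = (n > 0.2) & (c > 0.3)   # motility switched on by thresholds
            D_n = np.where(swarming, 0.5, 0.0) # swarm: diffuse; consolidate: don't
            growth = 0.02 * n * c
            n += dt * (D_n * lap(n) + growth)
            c += dt * (0.1 * lap(c) - growth)

        # Inspect the density profile for alternating high/low bands.
        print(np.round(n[::40], 2))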

  11. Scientific data interpolation with low dimensional manifold model

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
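
    The graph-Laplacian machinery that such schemes discretize can be illustrated on its own: the sketch below interpolates missing values on a regular grid by solving a Laplace equation on the grid graph, with the known samples as Dirichlet data. It is a deliberately simplified stand-in for the manifold-based regularizer.

        # Graph-Laplacian interpolation of missing grid values (illustrative).
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        def laplacian_interpolate(values, known_mask):
            ny, nx = values.shape
            N = ny * nx
            idx = np.arange(N).reshape(ny, nx)
            rows, cols = [], []
            for di, dj in ((0, 1), (1, 0)):                 # 4-neighbour grid edges
                a = idx[: ny - di, : nx - dj].ravel()
                b = idx[di:, dj:].ravel()
                rows += [a, b]; cols += [b, a]
            rows = np.concatenate(rows); cols = np.concatenate(cols)
            W = sp.coo_matrix((np.ones_like(rows, float), (rows, cols)), (N, N)).tocsr()
            deg = np.asarray(W.sum(axis=1)).ravel()
            L = (sp.diags(deg) - W).tocsr()                 # graph Laplacian
            x = values.ravel().copy()
            known = known_mask.ravel()
            unk = ~known
            # Solve L_uu x_u = -L_uk x_k for the unknown entries.
            x[unk] = spsolve(L[unk][:, unk].tocsc(), -(L[unk][:, known] @ x[known]))
            return x.reshape(ny, nx)

        rng = np.random.default_rng(5)
        truth = np.add.outer(np.sin(np.linspace(0, 3, 40)), np.cos(np.linspace(0, 3, 60)))
        mask = rng.random(truth.shape) < 0.3               # keep 30% of the samples
        filled = laplacian_interpolate(np.where(mask, truth, 0.0), mask)
        print(float(np.abs(filled - truth)[~mask].mean())) # small interpolation error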

  12. Gravitational form factors and decoupling in 2D

    NASA Astrophysics Data System (ADS)

    Ribeiro, Tiago G.; Shapiro, Ilya L.; Zanusso, Omar

    2018-07-01

    We calculate and analyse non-local gravitational form factors induced by quantum matter fields in curved two-dimensional space. The calculations are performed for scalars, spinors and massive vectors by means of the covariant heat kernel method up to the second order in the curvature and confirmed using Feynman diagrams. The analysis of the ultraviolet (UV) limit reveals a generalized "running" form of the Polyakov action for a nonminimal scalar field and the usual Polyakov action in the conformally invariant cases. In the infrared (IR) we establish the gravitational decoupling theorem, which can be seen directly from the form factors or from the physical beta function for fields of any spin.

  13. γ5 in the four-dimensional helicity scheme

    NASA Astrophysics Data System (ADS)

    Gnendiger, C.; Signer, A.

    2018-05-01

    We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.

  14. Scientific data interpolation with low dimensional manifold model

    DOE PAGES

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...

    2017-09-28

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  15. Scientific data interpolation with low dimensional manifold model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  16. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  17. Regularity criterion for solutions of the three-dimensional Cahn-Hilliard-Navier-Stokes equations and associated computations.

    PubMed

    Gibbon, John D; Pal, Nairita; Gupta, Anupam; Pandit, Rahul

    2016-12-01

    We consider the three-dimensional (3D) Cahn-Hilliard equations coupled to, and driven by, the forced, incompressible 3D Navier-Stokes equations. The combination, known as the Cahn-Hilliard-Navier-Stokes (CHNS) equations, is used in statistical mechanics to model the motion of a binary fluid. The potential development of singularities (blow-up) in the contours of the order parameter ϕ is an open problem. To address this we have proved a theorem that closely mimics the Beale-Kato-Majda theorem for the 3D incompressible Euler equations [J. T. Beale, T. Kato, and A. J. Majda, Commun. Math. Phys. 94, 61 (1984), doi:10.1007/BF01212349]. By taking an L^∞ norm of the energy of the full binary system, designated as E_∞, we have shown that ∫_0^t E_∞(τ) dτ governs the regularity of solutions of the full 3D system. Our direct numerical simulations (DNSs) of the 3D CHNS equations for (a) a gravity-driven Rayleigh-Taylor instability and (b) a constant-energy-injection forcing, with 128^3 to 512^3 collocation points and over the duration of our DNSs, confirm that E_∞ remains bounded as far as our computations allow.

  18. Self-assembly of a binodal metal-organic framework exhibiting a demi-regular lattice.

    PubMed

    Yan, Linghao; Kuang, Guowen; Zhang, Qiushi; Shang, Xuesong; Liu, Pei Nian; Lin, Nian

    2017-10-26

    Designing metal-organic frameworks with new topologies is a long-standing quest because new topologies often accompany new properties and functions. Here we report that 1,3,5-tris[4-(pyridin-4-yl)phenyl]benzene molecules coordinate with Cu atoms to form a two-dimensional framework in which Cu adatoms form a nanometer-scale demi-regular lattice. The lattice is articulated by perfectly arranged twofold and threefold pyridyl-Cu coordination motifs in a ratio of 1 : 6 and features local dodecagonal symmetry. This structure is thermodynamically robust and emerges solely when the molecular density is at a critical value. In comparison, we present three framework structures that consist of semi-regular and regular lattices of Cu atoms self-assembled out of 1,3,5-tris[4-(pyridin-4-yl)phenyl]benzene and trispyridylbenzene molecules. Thus a family of regular, semi-regular and demi-regular lattices can be achieved by Cu-pyridyl coordination.

  19. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  20. Shaping highly regular glass architectures: A lesson from nature

    PubMed Central

    Schoeppler, Vanessa; Reich, Elke; Vacelet, Jean; Rosenthal, Martin; Pacureanu, Alexandra; Rack, Alexander; Zaslansky, Paul; Zolotoyabko, Emil; Zlotnikov, Igor

    2017-01-01

    Demospongiae is a class of marine sponges that mineralize skeletal elements, the glass spicules, made of amorphous silica. The spicules exhibit a diversity of highly regular three-dimensional branched morphologies that are a paradigm example of symmetry in biological systems. Current glass shaping technology requires treatment at high temperatures. In this context, the mechanism by which glass architectures are formed by living organisms remains a mystery. We uncover the principles of spicule morphogenesis. During spicule formation, the process of silica deposition is templated by an organic filament. It is composed of enzymatically active proteins arranged in a mesoscopic hexagonal crystal-like structure. In analogy to synthetic inorganic nanocrystals that show high spatial regularity, we demonstrate that the branching of the filament follows specific crystallographic directions of the protein lattice. In correlation with the symmetry of the lattice, filament branching determines the highly regular morphology of the spicules on the macroscale. PMID:29057327

  1. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  2. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  3. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

    Three dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances which yields nonlinear "2½-dimensional" limit resonant equations in the limit f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f on an infinite time interval is established. Then, the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to that of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, with estimates uniform in alpha.

  4. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
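
    To make the motivation for the TV term concrete, the sketch below compares denoising with a smoothed (differentiable) total variation penalty against plain Tikhonov smoothing, using simple gradient descent. This is a stand-in for illustration only; the paper itself solves the TV subproblem with the split-Bregman iteration.

        # Smoothed-TV versus Tikhonov denoising by gradient descent; a simple
        # stand-in illustrating why TV preserves sharp contrasts.
        import numpy as np

        def grad_descent_denoise(b, lam, tv=True, n_iter=1000, step=0.02, eps=1e-2):
            x = b.copy()
            for _ in range(n_iter):
                dx = np.diff(x, axis=0, append=x[-1:, :])    # forward differences
                dy = np.diff(x, axis=1, append=x[:, -1:])
                if tv:
                    mag = np.sqrt(dx**2 + dy**2 + eps)
                    px, py = dx / mag, dy / mag              # gradient of smoothed TV
                else:
                    px, py = dx, dy                          # gradient of Tikhonov term
                div = (np.diff(px, axis=0, prepend=px[:1, :])
                       + np.diff(py, axis=1, prepend=py[:, :1]))
                x -= step * ((x - b) - lam * div)            # descend the objective
            return x

        rng = np.random.default_rng(10)
        truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0   # sharp velocity block
        noisy = truth + 0.3 * rng.standard_normal(truth.shape)
        for flag, name in ((True, "TV"), (False, "Tikhonov")):
            x = grad_descent_denoise(noisy, lam=0.5, tv=flag)
            print(name, float(np.abs(x - truth).mean()))        # TV typically keeps
                                                                # the edges sharper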

  5. Evaluating four-loop conformal Feynman integrals by D-dimensional differential equations

    NASA Astrophysics Data System (ADS)

    Eden, Burkhard; Smirnov, Vladimir A.

    2016-10-01

    We evaluate a four-loop conformal integral, i.e. an integral over four four-dimensional coordinates, by turning to its dimensionally regularized version and applying differential equations for the set of the corresponding 213 master integrals. To solve these linear differential equations we follow the strategy suggested by Henn and switch to a uniformly transcendental basis of master integrals. We find a solution to these equations up to weight eight in terms of multiple polylogarithms. Further, we present an analytical result for the given four-loop conformal integral considered in four-dimensional space-time in terms of single-valued harmonic polylogarithms. As a by-product, we obtain analytical results for all the other 212 master integrals within dimensional regularization, i.e. considered in D dimensions.

  6. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
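
    The first strategy, keeping both the residual norm and the solution norm small, amounts to scanning the regularization parameter and inspecting the resulting trade-off curve. A generic sketch (using scikit-learn's Lasso as the l1 solver on synthetic data; nothing of the structural setting is reproduced) follows.

        # Scan the l1 regularization parameter and record the residual-norm
        # versus solution-norm trade-off; a generic stand-in for the paper's
        # first parameter-selection strategy.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(6)
        A = rng.standard_normal((60, 120))
        x_true = np.zeros(120); x_true[[7, 41, 88]] = (1.0, -2.0, 1.5)  # sparse "damage"
        y = A @ x_true + 0.01 * rng.standard_normal(60)

        lams = np.geomspace(1e-4, 1e0, 25)
        curve = []
        for lam in lams:
            x = Lasso(alpha=lam, max_iter=50000).fit(A, y).coef_
            curve.append((lam, np.linalg.norm(y - A @ x), np.linalg.norm(x, 1)))

        # Keep the lambdas for which both norms are "small", e.g. within a factor
        # of a few of their minima; a range of acceptable parameters results.
        for lam, res, sol in curve:
            print(f"lambda={lam:.1e}  residual={res:.3f}  ||x||_1={sol:.3f}")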

  7. Gene selection for microarray data classification via subspace learning and manifold regularization.

    PubMed

    Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui

    2017-12-19

    With the rapid development of DNA microarray technology, large amount of genomic data has been generated. Classification of these microarray data is a challenge task since gene expression data are often with thousands of genes but a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data with the irrelevant and redundant genes removed. Compared with original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold regularized subspace learning problem. In detail, a projection matrix is used to project the original high dimensional microarray data into a lower dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator of different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better when compared with other state-of-the-art methods in terms of microarray data classification. Graphical Abstract The graphical abstract of this work.

  8. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. The algorithms commonly used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, yet the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem for a multidimensional function. A new error function is designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method improves the performance of the phase retrieval algorithm and yields better images, especially under low-SNR conditions.
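
    For orientation, a bare-bones error-reduction loop of the kind the paper seeks to improve is sketched below, with a support-plus-positivity constraint in object space and the measured magnitudes imposed in Fourier space. The regularized error function proposed in the paper is not reproduced.

        # Bare-bones Fienup-style error-reduction phase retrieval on synthetic
        # data; illustrates the retrieval loop the paper aims to regularize.
        import numpy as np

        rng = np.random.default_rng(7)
        obj = np.zeros((64, 64))
        obj[24:40, 28:36] = rng.random((16, 8))            # object inside known support
        support = obj > 0
        mag = np.abs(np.fft.fft2(obj))                     # measured Fourier magnitudes

        g = rng.random(obj.shape) * support                # random initial guess
        for _ in range(500):
            G = np.fft.fft2(g)
            G = mag * np.exp(1j * np.angle(G))             # impose measured magnitudes
            g = np.real(np.fft.ifft2(G))
            g = np.where(support & (g > 0), g, 0.0)        # impose support/positivity

        err = np.linalg.norm(np.abs(np.fft.fft2(g)) - mag) / np.linalg.norm(mag)
        print(f"relative Fourier-magnitude error: {err:.3e}")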

  9. Existence and Regularity of Invariant Measures for the Three Dimensional Stochastic Primitive Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glatt-Holtz, Nathan, E-mail: negh@vt.edu; Kukavica, Igor, E-mail: kukavica@usc.edu; Ziane, Mohammed, E-mail: ziane@usc.edu

    2014-05-15

    We establish the continuity of the Markovian semigroup associated with strong solutions of the stochastic 3D Primitive Equations, and prove the existence of an invariant measure. The proof is based on new moment bounds for strong solutions. The invariant measure is supported on strong solutions and is furthermore shown to have higher regularity properties.

  10. On the local well-posedness and a Prodi-Serrin-type regularity criterion of the three-dimensional MHD-Boussinesq system without thermal diffusion

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Pei, Yuan

    2017-07-01

    We prove a Prodi-Serrin-type global regularity condition for the three-dimensional Magnetohydrodynamic-Boussinesq system (3D MHD-Boussinesq) without thermal diffusion, in terms of only two velocity and two magnetic components. To the best of our knowledge, this is the first Prodi-Serrin-type criterion for such a 3D hydrodynamic system which is not fully dissipative, and indicates that such an approach may be successful on other systems. In addition, we provide a constructive proof of the local well-posedness of solutions to the fully dissipative 3D MHD-Boussinesq system, and also the fully inviscid, irresistive, non-diffusive MHD-Boussinesq equations. We note that, as a special case, these results include the 3D non-diffusive Boussinesq system and the 3D MHD equations. Moreover, they can be extended without difficulty to include the case of a Coriolis rotational term.

  11. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361

  12. Discriminant analysis for fast multiclass data classification through regularized kernel function approximation.

    PubMed

    Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K

    2010-06-01

    In this brief we propose multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low-dimensional label space. The classification of patterns is carried out in this low-dimensional label subspace. A test pattern is classified depending on its proximity to the class centroids. The effectiveness of the proposed method is experimentally verified and compared with the multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to multiclass SVM, with comparable testing accuracy, principally on large data sets.
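
    A hedged, simplified analogue of the scheme can be put together from a reduced-kernel regularized least-squares map into label space followed by nearest-centroid classification; the RBF kernel, landmark selection and parameter values below are illustrative choices, not the paper's.

        # Reduced-kernel regularized least squares into label space, then
        # nearest-centroid classification; a simplified VVRKFA-style analogue.
        import numpy as np

        def rbf(A, B, gamma=0.5):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def fit_predict(X, y, X_test, n_classes, lam=1e-2, n_landmarks=50):
            rng = np.random.default_rng(0)
            Z = X[rng.choice(len(X), size=min(n_landmarks, len(X)), replace=False)]
            K = rbf(X, Z)                                    # reduced kernel matrix
            Y = np.eye(n_classes)[y]                         # one-hot label targets
            # Regularized least squares: W = (K'K + lam I)^-1 K'Y
            W = np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ Y)
            centroids = np.array([(K[y == c] @ W).mean(0) for c in range(n_classes)])
            P = rbf(X_test, Z) @ W                           # project test patterns
            d = ((P[:, None, :] - centroids[None]) ** 2).sum(-1)
            return d.argmin(axis=1)                          # nearest class centroid

        # Tiny synthetic 3-class example.
        rng = np.random.default_rng(8)
        means = np.array([[0, 0], [3, 0], [0, 3]])
        X = np.vstack([m + rng.standard_normal((40, 2)) for m in means])
        y = np.repeat([0, 1, 2], 40)
        print((fit_predict(X, y, X, 3) == y).mean())         # training accuracy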

  13. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
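
    A generic frequency-domain example makes the role of the regularization concrete: below, a made-up low-pass system response is deconvolved with a Tikhonov-regularized inverse filter. The choice of the regularization weight is exactly the step whose uncertainty contribution the paper addresses.

        # Frequency-domain deconvolution with Tikhonov regularization; the
        # hydrophone system response H is a made-up low-pass stand-in.
        import numpy as np

        rng = np.random.default_rng(9)
        n, fs = 2048, 100e6                                  # samples, sample rate
        t = np.arange(n) / fs
        x = np.exp(-((t - 5e-6) / 4e-7) ** 2) * np.sin(2 * np.pi * 5e6 * t)

        f = np.fft.rfftfreq(n, 1 / fs)
        H = 1.0 / (1.0 + 1j * f / 20e6)                      # toy measuring system
        y = np.fft.irfft(np.fft.rfft(x) * H, n)
        y += 1e-3 * rng.standard_normal(n)                   # measurement noise

        lam = 1e-2                                           # regularization weight
        Y = np.fft.rfft(y)
        X_hat = np.conj(H) * Y / (np.abs(H) ** 2 + lam)      # regularized inverse
        x_hat = np.fft.irfft(X_hat, n)
        print(float(np.abs(x_hat - x).max()))                # residual estimation error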

  14. Partial regularity of viscosity solutions for a class of Kolmogorov equations arising from mathematical finance

    NASA Astrophysics Data System (ADS)

    Rosestolato, M.; Święch, A.

    2017-02-01

    We study value functions which are viscosity solutions of certain Kolmogorov equations. Using PDE techniques we prove that they are C^{1+α} regular on special finite dimensional subspaces. The problem has origins in hedging derivatives of risky assets in mathematical finance.

  15. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  16. Multilinear Graph Embedding: Representation and Regularization for Images.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge, especially when multiple latent factors are hidden in the data-generation process. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but inadequately interprets local variations. To address this, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE, to leverage manifold learning techniques in multilinear models. Our method theoretically links linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  17. Motion-aware temporal regularization for improved 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mory, Cyril; Janssens, Guillaume; Rit, Simon

    2016-09-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.

  18. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
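
    A much-simplified sketch of the underlying idea, assuming numpy and scikit-learn: treat the points of the ℓ1 (lasso) regularization path as candidate models and average their coefficients, here with exhaustive BIC weights instead of the paper's Markov chain Monte Carlo model composition:

```python
import numpy as np
from sklearn.linear_model import lasso_path

def bic_averaged_lasso(X, y, n_alphas=50):
    """Treat points of the l1 path as candidate models and average their
    coefficients with BIC weights (exhaustive scoring here; the paper
    explores the path with MC3 instead). Assumes X and y are centered."""
    n = len(y)
    alphas, coefs, _ = lasso_path(X, y, n_alphas=n_alphas)
    bics = np.empty(coefs.shape[1])
    for j in range(coefs.shape[1]):
        resid = y - X @ coefs[:, j]
        k = np.count_nonzero(coefs[:, j])
        sigma2 = max(np.mean(resid ** 2), 1e-12)
        bics[j] = n * np.log(sigma2) + k * np.log(n)
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()
    return coefs @ w               # model-averaged coefficient vector
```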

  19. Deforming regular black holes

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2017-06-01

    In this work, we have deformed regular black holes which possess a general mass term described by a function that generalizes the Bardeen and Hayward mass functions. By using linear constraints in the energy-momentum tensor to generate metrics, the solutions presented in this work are either regular or singular. That is, within this approach, it is possible to generate regular or singular black holes from regular or singular black holes. Moreover, contrary to the Bardeen and Hayward regular solutions, the deformed regular black holes may violate the weak energy condition despite the presence of spherical symmetry. Some comments on accretion onto deformed black holes in cosmological scenarios are made.

  20. Slow dynamics and regularization phenomena in ensembles of chaotic neurons

    NASA Astrophysics Data System (ADS)

    Rabinovich, M. I.; Varona, P.; Torres, J. J.; Huerta, R.; Abarbanel, H. D. I.

    1999-02-01

    We have explored the role of calcium concentration dynamics in the generation of chaos and in the regularization of the bursting oscillations using a minimal neural circuit of two coupled model neurons. In regions of the control parameter space where the slowest component, namely the calcium concentration in the endoplasmic reticulum, depends only weakly on the other variables, this model is analogous to three-dimensional systems such as those found in [1] or [2]. These are minimal models that describe the fundamental characteristics of the chaotic spiking-bursting behavior observed in real neurons. We have investigated different regimes of cooperative behavior in large assemblies of such units using lattices of non-identical, electrically coupled Hindmarsh-Rose neurons with parameters chosen randomly inside the chaotic region. We study the regularization mechanisms in large assemblies and the development of several spatio-temporal patterns as a function of the interconnectivity among nearest neighbors.
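
    For concreteness, a minimal sketch (scipy, illustrative parameters) of the building block used in such lattices: two electrically coupled Hindmarsh-Rose neurons integrated in the chaotic bursting regime:

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_hindmarsh_rose(t, u, I=3.25, r=0.005, g=0.1):
    """Two electrically coupled Hindmarsh-Rose neurons; I in the chaotic
    bursting range, g the coupling conductance (values illustrative)."""
    du = np.empty(6)
    for i in range(2):
        x, y, z = u[3*i:3*i+3]
        x_other = u[3*((i + 1) % 2)]            # membrane potential of partner
        du[3*i]     = y + 3*x**2 - x**3 - z + I + g*(x_other - x)
        du[3*i + 1] = 1 - 5*x**2 - y
        du[3*i + 2] = r*(4*(x + 1.6) - z)       # slow (calcium-like) variable
    return du

sol = solve_ivp(coupled_hindmarsh_rose, (0.0, 2000.0),
                np.random.default_rng(0).uniform(-1, 1, 6), max_step=0.1)
# membrane-potential traces of the two cells: sol.y[0], sol.y[3]
```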

  1. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
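
    The sketch below, assuming numpy and synthetic kernels K1 and K2, shows SVD filtering combined with single-parameter Tikhonov inversion for a separable 2D kernel; the UPEN-style multiparameter regularization, mean windowing, and nonnegativity constraints of I2DUPEN are omitted:

```python
import numpy as np

def svd_filtered_tikhonov(K1, K2, S, lam=1e-1, tol=1e-3):
    """Invert S ~ K1 @ F @ K2.T for a separable 2D kernel. Small singular
    values are filtered out (the SVD filter), then a single-parameter
    Tikhonov solution is computed in the compressed space."""
    U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)
    k1, k2 = s1 > tol * s1[0], s2 > tol * s2[0]      # SVD filtering masks
    U1, s1, V1t = U1[:, k1], s1[k1], V1t[k1]
    U2, s2, V2t = U2[:, k2], s2[k2], V2t[k2]
    Sc = U1.T @ S @ U2                               # compressed data
    D = np.outer(s1, s2)                             # diagonal of the system
    Fc = D * Sc / (D**2 + lam)                       # Tikhonov in SVD basis
    return V1t.T @ Fc @ V2t                          # estimated distribution
```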

  2. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  3. Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking

    NASA Astrophysics Data System (ADS)

    Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.

    2017-03-01

    Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression, while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method 1) builds a data-driven four-dimensional dictionary of Lagrangian displacements using sparse learning, 2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and 3) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block matching speckle tracking and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.

  4. Flexible Lithium-Ion Fiber Battery by the Regular Stacking of Two-Dimensional Titanium Oxide Nanosheets Hybridized with Reduced Graphene Oxide.

    PubMed

    Hoshide, Tatsumasa; Zheng, Yuanchuan; Hou, Junyu; Wang, Zhiqiang; Li, Qingwen; Zhao, Zhigang; Ma, Renzhi; Sasaki, Takayoshi; Geng, Fengxia

    2017-06-14

    Increasing interest has recently been devoted to developing small, rapid, and portable electronic devices; thus, it is becoming critically important to provide matching light and flexible energy-storage systems to power them. To this end, compared with traditional planar sandwiched structures, which are inevitably bulky, heavy, and rigid, linear fiber-shaped lithium-ion batteries (LIB) have become increasingly important owing to their combined advantages of miniaturization, adaptability, and weavability, progress on which depends heavily on the development of new fiber-shaped electrodes. Here, we report a novel fiber battery electrode based on the most widely used LIB material, titanium oxide, which is processed into two-dimensional nanosheets and assembled into a macroscopic fiber by a scalable wet-spinning process. The titania sheets are regularly stacked and conformally hybridized in situ with reduced graphene oxide (rGO), which serves as an efficient current collector and endows the novel fiber electrode with excellent integrated mechanical properties combined with superior battery performance in terms of linear densities, rate capabilities, and cyclic behaviors. The present study clearly demonstrates a new material-design paradigm toward novel fiber electrodes by assembling metal oxide nanosheets into an ordered macroscopic structure, which represents a most promising route to advanced flexible energy-storage systems.

  5. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings.

  6. Accelerating 4D flow MRI by exploiting vector field divergence regularization.

    PubMed

    Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian

    2016-01-01

    The aim was to improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields.
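
    As a small illustration of the FD-type penalty, the helper below (hypothetical, numpy only) evaluates the ℓ1-norm of finite-difference divergence and curl for a 2D velocity slice; in the paper this term enters an iterative 3D reconstruction rather than being evaluated in isolation:

```python
import numpy as np

def div_curl_l1(vx, vy, h=1.0):
    """l1-norm of finite-difference divergence and curl of a 2D velocity
    slice; in the paper this penalty enters an iterative 3D reconstruction."""
    dvx_dy, dvx_dx = np.gradient(vx, h, h)
    dvy_dy, dvy_dx = np.gradient(vy, h, h)
    div = dvx_dx + dvy_dy
    curl = dvy_dx - dvx_dy
    return np.abs(div).sum() + np.abs(curl).sum()
```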

  7. Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri

    2017-03-01

    Triangular meshes are now widely used for modeling three-dimensional objects. Since these models have very high resolution and the mesh geometry is often very dense, it is necessary to remesh the object to reduce its complexity and to improve the mesh quality (connectivity regularity). In this paper, we review the main state-of-the-art methods for semi-regular remeshing, semi-regular remeshing being mainly relevant for wavelet-based compression. We then present our remeshing method, based on a trust-region spherical geometry image, to obtain a good 3D mesh compression scheme used to deform 3D meshes based on a Multi-Library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and yields efficient compression with a minimal set of features to support a good 3D deformation scheme.

  8. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES requires less running time.
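
    For contrast with the exact surfaces computed by CV-SES, a brute-force approximation is easy to state: sample the CV error of a cost-sensitive SVM on a grid over its two regularization parameters. A scikit-learn sketch (binary labels 0/1 assumed; class weights emulate the two cost parameters):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cv_error_grid(X, y, c_pos_grid, c_neg_grid, folds=5):
    """K-fold CV error of a cost-sensitive SVM on a grid over its two
    regularization parameters (class weights emulate the two costs)."""
    err = np.empty((len(c_pos_grid), len(c_neg_grid)))
    for i, cp in enumerate(c_pos_grid):
        for j, cn in enumerate(c_neg_grid):
            clf = SVC(kernel="rbf", C=1.0, class_weight={1: cp, 0: cn})
            err[i, j] = 1.0 - cross_val_score(clf, X, y, cv=folds).mean()
    return err    # err.argmin() locates the grid's best (C+, C-) pair
```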

  9. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications

    PubMed Central

    Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.

    2018-01-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918

  10. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    PubMed

    Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A

    2018-04-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
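
    The regularization step itself can be sketched compactly. Below is a plain single-channel guided filter (He et al.), assuming scipy; in the paper's framework a filter of this family, made locally adaptive through supervoxels, replaces the Gaussian smoothing of each displacement component:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=7, eps=1e-3):
    """Single-channel guided filter (He et al.): smooths p while following
    edges of the guide I. Box means are computed with a uniform filter."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)        # local linear model p ~ a*I + b
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

# e.g., smoothing each displacement component with the fixed image as guide:
# ux_s = guided_filter(fixed_img, ux); uy_s = guided_filter(fixed_img, uy)
```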

  11. On the Global Regularity for the 3D Magnetohydrodynamics Equations Involving Partial Components

    NASA Astrophysics Data System (ADS)

    Qian, Chenyin

    2018-03-01

    In this paper, we study the regularity criteria of the three-dimensional magnetohydrodynamics system in terms of some components of the velocity field and the magnetic field. With a decomposition of the four nonlinear terms of the system, this result gives an improvement of some corresponding previous works (Yamazaki in J Math Fluid Mech 16: 551-570, 2014; Jia and Zhou in Nonlinear Anal Real World Appl 13: 410-418, 2012).

  12. Crossing the dividing surface of transition state theory. IV. Dynamical regularity and dimensionality reduction as key features of reactive trajectories

    NASA Astrophysics Data System (ADS)

    Lorquet, J. C.

    2017-04-01

    energies, these characteristics persist, but to a lesser degree. Recrossings of the dividing surface then become much more frequent and the phase space volumes of initial conditions that generate recrossing-free trajectories decrease. Altogether, one ends up with an additional illustration of the concept of reactive cylinder (or conduit) in phase space that reactive trajectories must follow. Reactivity is associated with dynamical regularity and dimensionality reduction, whatever the shape of the potential energy surface, no matter how strong its anharmonicity, and whatever the curvature of its reaction path. Both simplifying features persist during the entire reactive process, up to complete separation of fragments. The ergodicity assumption commonly assumed in statistical theories is inappropriate for reactive trajectories.

  13. Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.

  14. Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging

    PubMed Central

    Guo, Qijia; Wang, Jie; Chang, Tianying

    2017-01-01

    The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The thinner the sparse dimensions of the array, the more efficient the new algorithm will be. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant in terms of all the important characteristics of the approaches, especially the anti-noise capability. The computation cost is analyzed as well to evaluate the efficiency quantitatively. PMID:29113083

  15. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
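
    A crude stand-in for the parameter choice, assuming scipy: solve the damped least-squares problem with LSQR for each candidate parameter and keep the one whose residual best matches a known noise level (the paper derives the optimum far more efficiently from the LSQR bidiagonalization itself):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def pick_tikhonov_lambda(A, b, lambdas, noise_norm):
    """Solve min ||Ax-b||^2 + lam ||x||^2 with LSQR (damp = sqrt(lam)) for
    each candidate lam and keep the one whose residual norm is closest to
    the known noise level (discrepancy principle)."""
    best = None
    for lam in lambdas:
        x = lsqr(A, b, damp=np.sqrt(lam))[0]
        gap = abs(np.linalg.norm(A @ x - b) - noise_norm)
        if best is None or gap < best[0]:
            best = (gap, lam, x)
    return best[1], best[2]          # chosen lambda and its solution
```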

  16. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T; Cooper, Benjamin J; Kuncic, Zdenka; Keall, Paul J

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan, and was compared to FDK, ASD-POCS, and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS, and

  17. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-09-15

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
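
    The most familiar instance of bounded-variation regularization is total-variation denoising of blocky images, which scikit-image exposes directly; a short example (synthetic image, illustrative weight), standing in for the primal-dual algorithms analyzed in the paper:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Blocky test image corrupted by strong Gaussian noise, then recovered by
# total-variation (Chambolle) denoising; the weight is illustrative.
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
noisy = img + 0.8 * np.random.default_rng(0).normal(size=img.shape)
denoised = denoise_tv_chambolle(noisy, weight=0.3)
```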

  18. Topological regularization and self-duality in four-dimensional anti-de Sitter gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miskovic, Olivera; Olea, Rodrigo

    2009-06-15

    It is shown that the addition of a topological invariant (Gauss-Bonnet term) to the anti-de Sitter gravity action in four dimensions recovers the standard regularization given by the holographic renormalization procedure. This crucial step makes possible the inclusion of an odd parity invariant (Pontryagin term) whose coupling is fixed by demanding an asymptotic (anti) self-dual condition on the Weyl tensor. This argument allows one to find the dual point of the theory where the holographic stress tensor is related to the boundary Cotton tensor as T^i_j = ±(l²/8πG) C^i_j, which has been observed in recent literature in solitonic solutions and hydrodynamic models. A general procedure to generate the counterterm series for anti-de Sitter gravity in any even dimension from the corresponding Euler term is also briefly discussed.

  19. Analytic regularization of uniform cubic B-spline deformation fields.

    PubMed

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate that the analytic solution is between 61 and 1371 times faster than a numerical central differencing solution.
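
    A finite-difference surrogate for the quantity in question is easy to write down (sketch below, numpy); the paper's point is that for B-spline fields the same bending energy can be evaluated exactly and much faster as a quadratic form c^T B c on the spline coefficients c:

```python
import numpy as np

def bending_energy_fd(f, h=1.0):
    """Finite-difference estimate of the thin-plate bending energy
    integral of f_xx^2 + 2 f_xy^2 + f_yy^2 for a scalar field f."""
    fy, fx = np.gradient(f, h, h)
    fyy, _ = np.gradient(fy, h, h)
    fxy, fxx = np.gradient(fx, h, h)
    return np.sum(fxx**2 + 2 * fxy**2 + fyy**2) * h * h
```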

  20. Numerical Study of Sound Emission by 2D Regular and Chaotic Vortex Configurations

    NASA Astrophysics Data System (ADS)

    Knio, Omar M.; Collorec, Luc; Juvé, Daniel

    1995-02-01

    The far-field noise generated by a system of three Gaussian vortices lying over a flat boundary is numerically investigated using a two-dimensional vortex element method. The method is based on the discretization of the vorticity field into a finite number of smoothed vortex elements with spherical overlapping cores. The elements are convected in a Lagrangian reference frame along particle trajectories using the local velocity vector, given in terms of a desingularized Biot-Savart law. The initial structure of the vortex system is triangular; a one-dimensional family of initial configurations is constructed by keeping one side of the triangle fixed and vertical, and varying the abscissa of the centroid of the remaining vortex. The inviscid dynamics of this vortex configuration are first investigated using non-deformable vortices. Depending on the aspect ratio of the initial system, regular or chaotic motion occurs. Due to wall-related symmetries, the far-field sound always exhibits a time-independent quadrupolar directivity with maxima parallel and perpendicular to the wall. When regular motion prevails, the noise spectrum is dominated by discrete frequencies which correspond to the fundamental system frequency and its superharmonics. For chaotic motion, a broadband spectrum is obtained; computed sound levels are substantially higher than in non-chaotic systems. A more sophisticated analysis is then performed which accounts for vortex core dynamics. Results show that the vortex cores are susceptible to inviscid instability which leads to violent vorticity reorganization within the core. This phenomenon has little effect on the large-scale features of the motion of the system or on low frequency sound emission. However, it leads to the generation of a high-frequency noise band in the acoustic pressure spectrum. The latter is observed in both regular and chaotic system simulations.

  1. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect the optimal model complexity. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
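
    A toy version of the regression at the heart of the construction, assuming numpy: a rank-1 separated representation fitted by alternating Tikhonov-regularized least squares (the paper uses the multi-factor, vector-valued version with a roughening matrix instead of the identity):

```python
import numpy as np

def rank1_regularized_als(A, B, y, lam=1e-3, iters=50, seed=0):
    """Fit y ~ (A @ u) * (B @ v) (elementwise product of two factors) by
    alternating Tikhonov-regularized least squares in u and v."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=A.shape[1])
    v = rng.normal(size=B.shape[1])
    for _ in range(iters):
        Av = A * (B @ v)[:, None]            # fix v: linear model in u
        u = np.linalg.solve(Av.T @ Av + lam * np.eye(len(u)), Av.T @ y)
        Au = B * (A @ u)[:, None]            # fix u: linear model in v
        v = np.linalg.solve(Au.T @ Au + lam * np.eye(len(v)), Au.T @ y)
    return u, v
```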

  2. Regularized magnetotelluric inversion based on a minimum support gradient stabilizing functional

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Yu, Peng; Zhang, Luolei; Feng, Shaokong; Utada, Hisashi

    2017-11-01

    Regularization is used to solve the ill-posed problem of magnetotelluric inversion, usually by adding a stabilizing functional to the objective functional that allows us to obtain a stable solution. Among a number of possible stabilizing functionals, smoothing constraints are most commonly used, which produce spatially smooth inversion results. However, in some cases, the focused imaging of a sharp electrical boundary is necessary. Although past works have proposed functionals that may be suitable for the imaging of a sharp boundary, such as minimum support and minimum gradient support (MGS) functionals, they involve some difficulties and limitations in practice. In this paper, we propose a minimum support gradient (MSG) stabilizing functional as another possible choice of focusing stabilizer. In this approach, we calculate the gradient of the model stabilizing functional of the minimum support, which affects both the stability and the sharp boundary focus of the inversion. We then apply the discrete weighted matrix form of each stabilizing functional to build a unified form of the objective functional, allowing us to perform regularized inversion with a variety of stabilizing functionals in the same framework. By comparing the one-dimensional and two-dimensional synthetic inversion results obtained using the MSG stabilizing functional with those obtained using other stabilizing functionals, we demonstrate that the MSG results not only clearly image a sharp geoelectrical interface but are also quite stable and robust. Overall good performance in terms of both data fitting and model recovery suggests that this stabilizing functional is effective and useful in practical applications.

  3. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down the LDS's spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data we incorporate a second order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix which lead to two instances of our rLDS. We show that our rLDS recovers the intrinsic dimensionality of the time series dynamics well and improves predictive performance over baselines on both synthetic and real-world MTS datasets.

  4. Conservative regularization of compressible dissipationless two-fluid plasmas

    NASA Astrophysics Data System (ADS)

    Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.

    2018-02-01

    This paper extends our earlier approach [cf. A. Thyagaraja, Phys. Plasmas 17, 032503 (2010) and Krishnaswami et al., Phys. Plasmas 23, 022308 (2016)] to obtaining a priori bounds on enstrophy in neutral fluids and ideal magnetohydrodynamics. This results in a far-reaching local, three-dimensional, non-linear, dispersive generalization of a KdV-type regularization to compressible/incompressible dissipationless 2-fluid plasmas and models derived therefrom (quasi-neutral, Hall, and ideal MHD). It involves the introduction of vortical and magnetic "twirl" terms λ_l² (w_l + (q_l/m_l)B) × (∇ × w_l) in the ion/electron velocity equations (l = i, e), where the w_l are vorticities. The cut-off lengths λ_l and number densities n_l must satisfy λ_l² n_l = C_l, where the C_l are constants. A novel feature is that the "flow" current Σ_l q_l n_l v_l in Ampère's law is augmented by a solenoidal "twirl" current Σ_l ∇ × ∇ × (λ_l² j_flow,l). The resulting equations imply conserved linear and angular momenta and a positive definite swirl energy density E* which includes an enstrophic contribution Σ_l (1/2) λ_l² ρ_l w_l². It is shown that the equations admit a Hamiltonian-Poisson bracket formulation. Furthermore, singularities in ∇ × B are conservatively regularized by adding (λ_B²/2μ₀)(∇ × B)² to E*. Finally, it is proved that among regularizations that admit a Hamiltonian formulation and preserve the continuity equations along with the symmetries of the ideal model, the twirl term is unique and minimal in non-linearity and space derivatives of velocities.

  5. Regularity of center-of-pressure trajectories depends on the amount of attention invested in postural control

    PubMed Central

    Donker, Stella F.; Roerdink, Melvyn; Greven, An J.

    2007-01-01

    The influence of attention on the dynamical structure of postural sway was examined in 30 healthy young adults by manipulating the focus of attention. In line with the proposed direct relation between the amount of attention invested in postural control and regularity of center-of-pressure (COP) time series, we hypothesized that: (1) increasing cognitive involvement in postural control (i.e., creating an internal focus by increasing task difficulty through visual deprivation) increases COP regularity, and (2) withdrawing attention from postural control (i.e., creating an external focus by performing a cognitive dual task) decreases COP regularity. We quantified COP dynamics in terms of sample entropy (regularity), standard deviation (variability), sway-path length of the normalized posturogram (curviness), largest Lyapunov exponent (local stability), correlation dimension (dimensionality) and scaling exponent (scaling behavior). Consistent with hypothesis 1, standing with eyes closed significantly increased COP regularity. Furthermore, variability increased and local stability decreased, implying ineffective postural control. Conversely, and in line with hypothesis 2, performing a cognitive dual task while standing with eyes closed led to greater irregularity and smaller variability, suggesting an increase in the "efficiency" or "automaticity" of postural control. In conclusion, these findings not only indicate that regularity of COP trajectories is positively related to the amount of attention invested in postural control, but also substantiate that in certain situations an increased internal focus may in fact be detrimental to postural control. PMID:17401553
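
    Sample entropy, the regularity measure used here, is short to implement; a minimal numpy version (requires numpy >= 1.20 for sliding_window_view, and uses the customary tolerance r = 0.2 SD):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1D series (lower = more regular); tolerance is
    the customary r = 0.2*SD. Template counts differ by one vector from
    the strictest definition, which is immaterial for long series."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def pair_count(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)
        return ((d <= r).sum() - len(emb)) / 2   # unordered pairs, no self
    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```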

  6. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with varying 2-14 mm kernel sizes. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines variable spherical kernel sizes with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy.

  7. Swimming in a two-dimensional Brinkman fluid: Computational modeling and regularized solutions

    NASA Astrophysics Data System (ADS)

    Leiderman, Karin; Olson, Sarah D.

    2016-02-01

    The incompressible Brinkman equation represents the homogenized fluid flow past obstacles that comprise a small volume fraction. In nondimensional form, the Brinkman equation can be characterized by a single parameter that represents the friction or resistance due to the obstacles. In this work, we derive an exact fundamental solution for 2D Brinkman flow driven by a regularized point force and describe the numerical method to use it in practice. To test our solution and method, we compare numerical results with an analytic solution of a stationary cylinder in a uniform Brinkman flow. Our method is also compared to asymptotic theory; for an infinite-length, undulating sheet of small amplitude, we recover an increasing swimming speed as the resistance is increased. With this computational framework, we study a model swimmer of finite length and observe an enhancement in propulsion and efficiency for small to moderate resistance. Finally, we study the interaction of two swimmers where attraction does not occur when the initial separation distance is larger than the screening length.

  8. The ARM Best Estimate 2-dimensional Gridded Surface

    DOE Data Explorer

    Xie, Shaocheng; Qi, Tang

    2015-06-15

    The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.
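
    The basic operation, interpolating scattered station measurements onto a regular 2D grid, can be sketched with scipy; coordinates and values below are synthetic placeholders, not ARM data:

```python
import numpy as np
from scipy.interpolate import griddata

# Scattered "station" measurements (synthetic placeholders, not ARM data)
# interpolated onto a regular lon-lat grid.
rng = np.random.default_rng(1)
lon, lat = rng.uniform(-99, -95, 30), rng.uniform(34, 38, 30)
t2m = 290.0 + 3.0 * rng.standard_normal(30)          # e.g., 2-m temperature, K
glon, glat = np.meshgrid(np.linspace(-99, -95, 81), np.linspace(34, 38, 81))
t2m_grid = griddata((lon, lat), t2m, (glon, glat), method="linear")
```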

  9. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be "modern least-squares". The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
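
    The convexity-preserving idea described above can be made concrete with the minimax-concave penalty (MCP), whose proximity operator is the firm threshold; in the scalar case the cost 0.5(y-x)^2 + MCP(x; λ, γ) stays strictly convex for γ > 1. A small numpy sketch:

```python
import numpy as np

def firm_threshold(y, lam, gamma=2.0):
    """Proximity operator of the minimax-concave penalty (MCP). For the
    scalar cost 0.5*(y-x)^2 + MCP(x; lam, gamma), gamma > 1 keeps the cost
    strictly convex, so the non-convex penalty still yields a unique minimizer."""
    a = np.abs(y)
    out = np.where(a <= lam, 0.0,
          np.where(a <= gamma * lam, gamma * (a - lam) / (gamma - 1.0), a))
    return np.sign(y) * out
```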

  10. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  11. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
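
    One way to realize such a scheme, as a hedged sketch rather than the paper's exact formulation: minimize ||Ax-b||² · ||x||² by alternating Tikhonov solves in which the effective weight α is re-estimated from the current iterate, so no regularization parameter is fixed beforehand:

```python
import numpy as np

def multiplicative_reg_solve(A, b, iters=30):
    """Minimize ||Ax-b||^2 * ||x||^2 by alternating Tikhonov solves whose
    weight alpha is re-estimated from the current iterate; the amount of
    regularization thus adjusts itself during the iterations."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        alpha = (np.linalg.norm(A @ x - b) ** 2 /
                 max(np.linalg.norm(x) ** 2, 1e-12))
        x = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)
    return x
```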

  12. Energy in higher-dimensional spacetimes

    NASA Astrophysics Data System (ADS)

    Barzegar, Hamed; Chruściel, Piotr T.; Hörzinger, Michael

    2017-12-01

    We derive expressions for the total Hamiltonian energy of gravitating systems in higher-dimensional theories in terms of the Riemann tensor, allowing a cosmological constant Λ ∈ ℝ. Our analysis covers asymptotically anti-de Sitter spacetimes, asymptotically flat spacetimes, as well as Kaluza-Klein asymptotically flat spacetimes. We show that the Komar mass equals the Arnowitt-Deser-Misner (ADM) mass in stationary asymptotically flat spacetimes in all dimensions, generalizing the four-dimensional result of Beig, and that this is no longer true with Kaluza-Klein asymptotics. We show that the Hamiltonian mass does not necessarily coincide with the ADM mass in Kaluza-Klein asymptotically flat spacetimes, and that the Witten positivity argument provides a lower bound for the Hamiltonian mass—and not for the ADM mass—in terms of the electric charge. We illustrate our results on the five-dimensional Rasheed metrics, which we study in some detail, pointing out restrictions that arise from the requirement of regularity, which have gone seemingly unnoticed so far in the literature.

  13. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366

  14. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  15. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
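
    The FFT-accelerated step common to these three records is a Green's-function convolution on a regular grid; the zero-padded sketch below (numpy, singular self-term omitted) returns the wrap-free linear-convolution block:

```python
import numpy as np

def greens_convolution_fft(src, g0, k, h):
    """Green's-function convolution of grid sources via zero-padded FFTs;
    g0 evaluates the kernel at k*R and the singular self-term is omitted."""
    ny, nx = src.shape
    y = (np.arange(2 * ny) - ny) * h
    x = (np.arange(2 * nx) - nx) * h
    Y, X = np.meshgrid(y, x, indexing="ij")
    R = np.hypot(Y, X)
    Rsafe = np.where(R > 0, R, 1.0)               # dodge the R = 0 evaluation
    G = np.where(R > 0, g0(k * Rsafe), 0.0)
    out = np.fft.ifft2(np.fft.fft2(G) * np.fft.fft2(src, G.shape))
    return out[ny:, nx:] * h * h                  # wrap-free linear-conv block

# e.g., 2D Helmholtz kernel (i/4) H0^(1)(kR):
# from scipy.special import hankel1
# g0 = lambda kr: 0.25j * hankel1(0, kr)
```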

  16. Two-dimensional simulation by regularization of free surface viscoplastic flows with Drucker-Prager yield stress and application to granular collapse

    NASA Astrophysics Data System (ADS)

    Lusso, Christelle; Ern, Alexandre; Bouchut, François; Mangeney, Anne; Farin, Maxime; Roche, Olivier

    2017-03-01

This work is devoted to numerical modeling and simulation of granular flows relevant to geophysical flows such as avalanches and debris flows. We consider an incompressible viscoplastic fluid, described by a rheology with pressure-dependent yield stress, in a 2D setting with a free surface. We implement a regularization method to deal with the singularity of the rheological law, using a mixed finite element approximation of the momentum and incompressibility equations, and an arbitrary Lagrangian Eulerian (ALE) formulation for the displacement of the domain. The free surface is evolved while taking care of its deposition onto the bottom and preventing it from folding over itself. Several tests are performed to assess the efficiency of our method. The first test is dedicated to verifying its accuracy and cost on a one-dimensional simple shear plug flow; on this configuration we set up rules for the choice of the numerical parameters. The second test compares the results of our numerical method to those predicted by an augmented Lagrangian formulation in the case of the collapse and spreading of a granular column over a horizontal rigid bed. Finally we show the reliability of our method by comparing numerical predictions to data from experiments on the granular collapse of both trapezoidal and rectangular columns over a horizontal rigid or erodible granular bed made of the same material. We compare the evolution of the free surface, the velocity profiles, and the static-flowing interface. The results show the ability of our method to deal numerically with the front behavior of granular collapses over an erodible bed.
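
    The regularization of the singular rheological law can be illustrated in a few lines: dividing the Drucker-Prager yield stress tau_y = mu_s * p by a smoothed shear-rate norm gives a finite effective viscosity at vanishing shear rate. The parameter names (eta0, mu_s, eps) are illustrative; the paper's finite element implementation is far more involved.

      import numpy as np

      def regularized_viscosity(gamma_dot, pressure, eta0, mu_s, eps):
          # Drucker-Prager yield stress tau_y = mu_s * p; dividing by
          # sqrt(gamma_dot^2 + eps^2) instead of |gamma_dot| removes the
          # singularity of the viscoplastic law at vanishing shear rate.
          tau_y = mu_s * np.maximum(pressure, 0.0)
          return eta0 + tau_y / np.sqrt(gamma_dot**2 + eps**2)

      # In 1D simple shear the stress tau = eta_eff * gamma_dot now tends
      # smoothly to the yield stress as gamma_dot -> 0.
      gd = np.logspace(-6, 1, 8)
      print(regularized_viscosity(gd, pressure=1.0e3, eta0=1.0, mu_s=0.4, eps=1e-4) * gd)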

  17. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization into some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
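
    A sketch of the flavor of the approach, assuming labels with -1 marking unlabeled points (an illustrative convention): an "ideal" kernel built from label agreement is blended into a base kernel, showing how a term linear in the kernel matrix can inject label information. The paper's precise objectives differ.

      import numpy as np

      def ideal_kernel(labels):
          # K_ideal[i, j] = 1 when i and j share a known label, else 0.
          y = np.asarray(labels)
          K = (y[:, None] == y[None, :]) & (y[:, None] != -1)
          return K.astype(float)

      def label_adapted_kernel(K, labels, lam=1.0):
          # Shift the base kernel toward the ideal one; the update is
          # linear in the kernel matrix, in the spirit of the paper.
          Knew = K + lam * ideal_kernel(labels)
          # Project back to the PSD cone in case numerical noise crept in.
          w, V = np.linalg.eigh(Knew)
          return (V * np.clip(w, 0, None)) @ V.T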

  18. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the frameworks established in existing model-based DAL methods for function learning, incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing so, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. Firstly, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Secondly, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, which we call the L1-MSAL method. Moreover, to deal with nonlinear learning problems, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets involving faces, video and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.
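
    A sketch of the L1-norm reconstruction step, assuming the neighbors of a point have already been found. Iteratively reweighted least squares (IRLS) is used here as one standard surrogate for the L1 problem, and LLE's sum-to-one constraint is omitted for brevity, so this is not the authors' exact solver.

      import numpy as np

      def l1_reconstruction_weights(x, neighbors, n_iter=50, delta=1e-8):
          # Approximate argmin_w ||x - N^T w||_1 via IRLS: each pass
          # solves a weighted least-squares surrogate of the L1 cost.
          N = np.asarray(neighbors)            # (k, d) neighbor coordinates
          k = N.shape[0]
          w = np.full(k, 1.0 / k)
          for _ in range(n_iter):
              r = x - N.T @ w                  # current reconstruction residual
              W = 1.0 / np.sqrt(r**2 + delta)  # L1 -> weighted L2 surrogate
              A = N @ (W[:, None] * N.T) + 1e-10 * np.eye(k)
              b = N @ (W * x)
              w = np.linalg.solve(A, b)
          return w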

  19. Fate of superconductivity in three-dimensional disordered Luttinger semimetals

    NASA Astrophysics Data System (ADS)

    Mandal, Ipsita

    2018-05-01

A superconducting instability can occur in three-dimensional quadratic band crossing semimetals only at a finite coupling strength, owing to the vanishing density of states at the quadratic band touching point. Since realistic materials are always disordered to some extent, we study the effect of short-range-correlated disorder on this superconducting quantum critical point using a controlled loop expansion with dimensional regularization. The renormalization group (RG) scheme allows us to determine the RG flows of the various interaction strengths and shows that disorder destroys the superconducting quantum critical point. In fact, the system exhibits a runaway flow to strong disorder.

  20. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION

    PubMed Central

    Allen, Genevera I.; Tibshirani, Robert

    2015-01-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823

  1. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    PubMed

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.

  2. Regular Pentagons and the Fibonacci Sequence.

    ERIC Educational Resources Information Center

    French, Doug

    1989-01-01

    Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)

  3. On regularization and error estimates for the backward heat conduction problem with time-dependent thermal diffusivity factor

    NASA Astrophysics Data System (ADS)

    Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba

    2018-10-01

This paper is concerned with a backward heat conduction problem with time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed, owing to the unbounded amplification of high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, new stability estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
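
    The stabilization idea can be sketched in a few lines for periodic 1D data: exact backward continuation multiplies each Fourier mode by exp(xi^2 * integral of a), which amplifies high frequencies without bound, so truncating the spectrum restores stability. The paper's Meyer-wavelet projection plays this cutoff role in a smoother, multiresolution way; the plain Fourier version below is only illustrative.

      import numpy as np

      def backward_heat_cutoff(g, L, a_int, xi_max):
          # g: samples of u(., T) on [0, L); a_int = integral_0^T a(t) dt.
          # Exact inversion scales mode xi by exp(xi^2 * a_int); modes with
          # |xi| > xi_max are discarded to keep the inversion stable.
          n = g.size
          xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
          gh = np.fft.fft(g)
          amp = np.where(np.abs(xi) <= xi_max, np.exp(xi**2 * a_int), 0.0)
          return np.real(np.fft.ifft(amp * gh))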

  4. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  5. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  6. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  7. Behavioral Dimensions in One-Year-Olds and Dimensional Stability in Infancy.

    ERIC Educational Resources Information Center

    Hagekull, Berit; And Others

    1980-01-01

    The dimensional structure of infants' behavioral repertoire was shown to be highly stable over 3 to 15 months of age. Factor analysis of parent questionnaire data produced seven factors named Intensity/Activity, Regularity, Approach-Withdrawal, Sensory Sensitivity, Attentiveness, Manageability and Sensitivity to New Food. An eighth factor,…

  8. LRSSLMDA: Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction

    PubMed Central

    Huang, Li

    2017-01-01

Predicting novel microRNA (miRNA)-disease associations is clinically significant due to miRNAs’ potential roles as diagnostic biomarkers and therapeutic targets for various human diseases. Previous studies have demonstrated the viability of utilizing different types of biological data to computationally infer new disease-related miRNAs. Yet researchers face the challenge of how to effectively integrate diverse datasets and make reliable predictions. In this study, we presented a computational model named Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction (LRSSLMDA), which projected miRNAs’/diseases’ statistical feature profiles and graph-theoretical feature profiles to a common subspace. It used Laplacian regularization to preserve the local structures of the training data and an L1-norm constraint to select important miRNA/disease features for prediction. The strength of dimensionality reduction enabled the model to be easily extended to much higher-dimensional datasets than those exploited in this study. Experimental results showed that LRSSLMDA outperformed ten previous models: the AUC of 0.9178 in global leave-one-out cross validation (LOOCV) and the AUC of 0.8418 in local LOOCV indicated the model’s superior prediction accuracy, and the average AUC of 0.9181+/-0.0004 in 5-fold cross validation justified its accuracy and stability. In addition, three types of case studies further demonstrated its predictive power. Potential miRNAs related to Colon Neoplasms, Lymphoma, Kidney Neoplasms, Esophageal Neoplasms and Breast Neoplasms were predicted by LRSSLMDA; respectively, 98%, 88%, 96%, 98% and 98% of the top 50 predictions were validated by experimental evidence. Therefore, we conclude that LRSSLMDA would be a valuable computational tool for miRNA-disease association prediction. PMID:29253885

  9. Regularization of instabilities in gravity theories

    NASA Astrophysics Data System (ADS)

    Ramazanoǧlu, Fethi M.

    2018-01-01

We investigate instabilities and their regularization in theories of gravitation. Instabilities can be beneficial since their growth often leads to prominent observable signatures, which makes them especially relevant to relatively low signal-to-noise ratio measurements such as gravitational wave detections. An indefinitely growing instability usually renders a theory unphysical; hence, a desirable instability should also come with underlying physical machinery that stops the growth at finite values, i.e., regularization mechanisms. The prototypical gravity theory that presents such an instability is the spontaneous scalarization phenomenon of scalar-tensor theories, which features a tachyonic instability. We identify the regularization mechanisms in this theory and show that they can be utilized to regularize other instabilities as well. Namely, we present theories in which spontaneous growth is triggered by a ghost rather than a tachyon and numerically calculate stationary solutions of scalarized neutron stars in these theories. We speculate on the possibility of regularizing known divergent instabilities in certain gravity theories using our findings and discuss alternative theories of gravitation in which regularized instabilities may be present. Even though we study many specific examples, our main point is the recognition of regularized instabilities as a common theme and unifying mechanism in a vast array of gravity theories.

  10. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts when conventional analytical algorithms are used. To address this problem, we propose a motion-compensated total variation regularization approach that fully exploits the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction improves the 4D-CBCT image quality.
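
    A sketch of the regularization term described above, assuming the 4D volume has already been motion-compensated into a pseudo-static sequence indexed as (phase, z, y, x): an isotropic 3D spatial total variation per phase plus a 1D temporal total variation along the phase axis. The data term and the variable splitting solver are omitted.

      import numpy as np

      def tv_cost(vol, lam_s=1.0, lam_t=1.0, eps=1e-8):
          # vol: motion-compensated 4D array (phase, z, y, x).
          dz = np.diff(vol, axis=1)
          dy = np.diff(vol, axis=2)
          dx = np.diff(vol, axis=3)
          # isotropic 3D spatial TV, evaluated on the common interior grid
          spatial = np.sqrt(dz[:, :, :-1, :-1]**2 +
                            dy[:, :-1, :, :-1]**2 +
                            dx[:, :-1, :-1, :]**2 + eps).sum()
          # 1D temporal TV along the phase direction
          temporal = np.abs(np.diff(vol, axis=0)).sum()
          return lam_s * spatial + lam_t * temporal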

  11. Implant platform switching: biomechanical approach using two-dimensional finite element analysis.

    PubMed

    Tabata, Lucas Fernando; Assunção, Wirley Gonçalves; Adelino Ricardo Barão, Valentim; de Sousa, Edson Antonio Capello; Gomes, Erica Alves; Delben, Juliana Aparecida

    2010-01-01

In implant therapy, peri-implant bone resorption has been noticed mainly in the first year after prosthesis insertion. This bone remodeling can sometimes jeopardize the outcome of the treatment, especially in areas where short implants are used and in aesthetic cases. To avoid this occurrence, platform switching (PS) has been employed. This study aimed to evaluate the biomechanical concept of PS in relation to stress distribution using two-dimensional finite element analysis. A regular matching-diameter abutment-implant connection (regular platform group [RPG]) and a PS connection (PS group [PSG]) were simulated by two two-dimensional finite element models that reproduced a 2-piece implant system with peri-implant bone tissue. A regular implant (prosthetic platform of 4.1 mm) and a wide implant (prosthetic platform of 5.0 mm) were used to represent the RPG and PSG, respectively, in which a regular prosthetic component of 4.1 mm was connected to represent the crown. A load of 100 N was applied to the models using ANSYS software. The RPG spread the stress over a wider area in the peri-implant bone tissue (159 MPa) and the implant (1610 MPa), whereas the PSG diminished the stress on the bone tissue (34 MPa) and implant (649 MPa). Within the limitations of the study, PS presented better biomechanical behavior with respect to stress distribution on the implant and especially in the bone tissue (80% less). However, in the crown and retention screw, an increase in stress concentration was observed.

  12. Bose-Einstein condensation in chains with power-law hoppings: Exact mapping on the critical behavior in d-dimensional regular lattices.

    PubMed

    Dias, W S; Bertrand, D; Lyra, M L

    2017-06-01

    Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, an irrelevant dangerous scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d>4.

  13. Bose-Einstein condensation in chains with power-law hoppings: Exact mapping on the critical behavior in d -dimensional regular lattices

    NASA Astrophysics Data System (ADS)

    Dias, W. S.; Bertrand, D.; Lyra, M. L.

    2017-06-01

Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, an irrelevant dangerous scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d > 4.

  14. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. It is therefore a major challenge to extract optimal features that improve classification while simultaneously reducing the feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. To avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To extract nonlinear features directly from the original high-dimensional vibration signals, RKMFA constructs two graphs describing intra-class compactness and inter-class separability, combining traditional manifold learning with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and are finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms other conventional approaches.
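
    A sketch of the regularized kernel step, assuming the intra-class compactness and inter-class penalty graph Laplacians have been built elsewhere: adding reg * I on the penalty side makes the generalized eigenproblem well posed in the small-sample regime. The graph construction and the paper's exact formulation are not reproduced here.

      import numpy as np
      from scipy.linalg import eigh

      def rkmfa_directions(K, L_c, L_p, reg=1e-3, n_components=2):
          # K: kernel (Gram) matrix; L_c, L_p: Laplacians of the intra-class
          # compactness and inter-class penalty graphs.
          A = K @ L_c @ K
          B = K @ L_p @ K + reg * np.eye(K.shape[0])   # regularization
          # Smallest generalized eigenvectors of A v = lam B v give
          # projections that compact classes while separating them.
          w, V = eigh(A, B)
          return V[:, :n_components]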

  15. 29 CFR 779.18 - Regular rate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Regular rate. 779.18 Section 779.18 Labor Regulations... OR SERVICES General Some Basic Definitions § 779.18 Regular rate. As explained in the interpretative... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  16. 29 CFR 779.18 - Regular rate.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Regular rate. 779.18 Section 779.18 Labor Regulations... OR SERVICES General Some Basic Definitions § 779.18 Regular rate. As explained in the interpretative... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  17. 29 CFR 779.18 - Regular rate.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Regular rate. 779.18 Section 779.18 Labor Regulations... OR SERVICES General Some Basic Definitions § 779.18 Regular rate. As explained in the interpretative... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  18. The gravitational potential of axially symmetric bodies from a regularized green kernel

    NASA Astrophysics Data System (ADS)

    Trova, A.; Huré, J.-M.; Hersant, F.

    2011-12-01

    The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical Astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also to generate approximations, in particular for geometrically thin discs and rings.

  19. Global regularity for a family of 3D models of the axi-symmetric Navier–Stokes equations

    NASA Astrophysics Data System (ADS)

    Hou, Thomas Y.; Liu, Pengfei; Wang, Fei

    2018-05-01

    We consider a family of three-dimensional models for the axi-symmetric incompressible Navier–Stokes equations. The models are derived by changing the strength of the convection terms in the axisymmetric Navier–Stokes equations written using a set of transformed variables. We prove the global regularity of the family of models in the case that the strength of convection is slightly stronger than that of the original Navier–Stokes equations, which demonstrates the potential stabilizing effect of convection.

  20. Twistor interpretation of slice regular functions

    NASA Astrophysics Data System (ADS)

    Altavilla, Amedeo

    2018-01-01

Given a slice regular function f : Ω ⊂ H → H, with Ω ∩ R ≠ ∅, it is possible to lift it to surfaces in the twistor space CP3 of S4 ≃ H ∪ {∞} (see Gentili et al., 2014). In this paper we show that the same result is true if one removes the hypothesis Ω ∩ R ≠ ∅ on the domain of the function f. Moreover we find that if a surface S ⊂ CP3 contains the image of the twistor lift of a slice regular function, then S has to be ruled by lines. Starting from these results we find all the projective classes of algebraic surfaces up to degree 3 in CP3 that contain the lift of a slice regular function. In addition we extend and further explore the so-called twistor transform, that is, a curve in Gr2(C4) which, given a slice regular function, returns the arrangement of lines whose lift carries on. With the explicit expressions of the twistor lift and of the twistor transform of a slice regular function we exhibit the set of slice regular functions whose twistor transform describes a rational line inside Gr2(C4), showing the role of slice regular functions not defined on R. At the end we study the twistor lift of a particular slice regular function not defined over the reals. This example shows the effectiveness of our approach and opens some questions.

  1. A review on the multivariate statistical methods for dimensional reduction studies

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei

    2017-05-01

In this research study we discuss multivariate statistical methods for dimensional reduction that have been developed by various researchers. Dimensionality reduction is valuable for speeding up algorithm execution and may also improve final classification or clustering accuracy. Noisy or flawed input data often lead to unsatisfactory algorithm performance. Removing uninformative or misleading data components can indeed help an algorithm find more general grouping regions and rules and, overall, achieve better performance on new data sets.
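
    As a concrete baseline for the methods surveyed, a few lines of PCA (an illustrative choice, not the review's only subject; all values are synthetic) show the mechanics of discarding uninformative directions:

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 50))             # 50 raw, mostly uninformative features
      X[:, :3] += 5 * rng.normal(size=(200, 1))  # a few directions carry the signal
      Z = PCA(n_components=3).fit_transform(X)   # keep the informative subspace
      print(Z.shape)                             # (200, 3)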

  2. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
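
    A sketch of the alternating scheme, assuming each candidate graph is given by its Laplacian: ranking scores are smoothed over the weighted combination of graphs, then weights are refit to favor graphs on which the scores are smooth. The entropic weight update below is one simple choice, not necessarily the update used in MultiG-Rank itself.

      import numpy as np

      def multi_graph_rank(Ls, y, lam=1.0, gamma=1.0, n_iter=20):
          # Ls: list of graph Laplacians; y: query relevance vector.
          n = y.size
          w = np.full(len(Ls), 1.0 / len(Ls))
          for _ in range(n_iter):
              L = sum(wk * Lk for wk, Lk in zip(w, Ls))
              f = np.linalg.solve(np.eye(n) + lam * L, y)   # smooth scores fit y
              s = np.array([f @ Lk @ f for Lk in Ls])       # smoothness per graph
              w = np.exp(-s / gamma)                        # reward smooth graphs
              w /= w.sum()
          return f, w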

  3. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8% but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′.

  4. Lattice QCD analysis for relation between quark confinement and chiral symmetry breaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doi, Takahiro M.; Suganuma, Hideo; Iritani, Takumi

    2016-01-22

The Polyakov loop and the Dirac modes are connected via a simple analytical relation on the temporally odd-number lattice, where the temporal lattice size is odd with the normal (nontwisted) periodic boundary condition. Using this relation, we investigate the relation between quark confinement and chiral symmetry breaking in QCD. In this paper, we discuss the properties of this analytical relation and numerically investigate each Dirac-mode contribution to the Polyakov loop in both confinement and deconfinement phases at the quenched level. This relation indicates that low-lying Dirac modes have little contribution to the Polyakov loop, and we numerically confirmed this fact. From our analysis, it is suggested that there is no direct one-to-one correspondence between quark confinement and chiral symmetry breaking in QCD. Also, in the confinement phase, we numerically find that there is a new “positive/negative symmetry” in the Dirac-mode matrix elements of the link-variable operator that appear in the relation, and that the Polyakov loop becomes zero because of this symmetry. In the deconfinement phase, this symmetry is broken and the Polyakov loop is non-zero.

  5. Non-AdS holography in 3-dimensional higher spin gravity — General recipe and example

    NASA Astrophysics Data System (ADS)

    Afshar, H.; Gary, M.; Grumiller, D.; Rashkov, R.; Riegler, M.

    2012-11-01

We present the general algorithm to establish the classical and quantum asymptotic symmetry algebra for non-AdS higher spin gravity and implement it for the specific example of spin-3 gravity in the non-principal embedding with Lobachevsky (H^2 × R) boundary conditions. The asymptotic symmetry algebra for this example consists of a quantum W_3^{(2)} (Polyakov-Bershadsky) algebra and an affine û(1) algebra. We show that unitary representations of the quantum W_3^{(2)} algebra exist only for two values of its central charge, the trivial c = 0 "theory" and the simple c = 1 theory.

  6. Multiple graph regularized protein domain ranking

    PubMed Central

    2012-01-01

    Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331

  7. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  8. 75 FR 53966 - Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  9. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572

  10. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    PubMed

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions. The three groups were: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared among the groups the subjective values of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  11. Helicity moduli of three-dimensional dilute XY models

    NASA Astrophysics Data System (ADS)

    Garg, Anupam; Pandit, Rahul; Solla, Sara A.; Ebner, C.

    1984-07-01

The helicity moduli of various dilute, classical XY models on three-dimensional lattices are studied with a view to understanding some aspects of the superfluidity of 4He in Vycor glass. A spinwave calculation is used to obtain the low-temperature helicity modulus of a regularly-diluted XY model. A similar calculation is performed for the randomly bond-diluted and site-diluted XY models in the limit of low dilution. A Monte Carlo simulation is used to obtain the helicity modulus of the randomly bond-diluted XY model over a wide range of temperature and dilution. The randomly diluted models are found to agree, whereas the regularly diluted model does not, with certain experimentally observed features of the variation in superfluid fraction with coverage of 4He in Vycor glass.

  12. Chaotic and regular instantons in helical shell models of turbulence

    NASA Astrophysics Data System (ADS)

    De Pietro, Massimo; Mailybaev, Alexei A.; Biferale, Luca

    2017-03-01

    Shell models of turbulence have a finite-time blowup in the inviscid limit, i.e., the enstrophy diverges while the single-shell velocities stay finite. The signature of this blowup is represented by self-similar instantonic structures traveling coherently through the inertial range. These solutions might influence the energy transfer and the anomalous scaling properties empirically observed for the forced and viscous models. In this paper we present a study of the instantonic solutions for a set of four shell models of turbulence based on the exact decomposition of the Navier-Stokes equations in helical eigenstates. We find that depending on the helical structure of each model, instantons are chaotic or regular. Some instantonic solutions tend to recover mirror symmetry for scales small enough. Models that have anomalous scaling develop regular nonchaotic instantons. Conversely, models that have nonanomalous scaling in the stationary regime are those that have chaotic instantons. The direction of the energy carried by each single instanton tends to coincide with the direction of the energy cascade in the stationary regime. Finally, we find that whenever the small-scale stationary statistics is intermittent, the instanton is less steep than the dimensional Kolmogorov scaling, independently of whether or not it is chaotic. Our findings further support the idea that instantons might be crucial to describe some aspects of the multiscale anomalous statistics of shell models.

  13. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
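
    A sketch of the regularized problem using the cvxpy modeling library (a tooling assumption; the paper works analytically via the connection to support vector regression): the sample expected shortfall in Rockafellar-Uryasev form, an L2 penalty on the weights as diversification pressure, and the budget constraint.

      import cvxpy as cp
      import numpy as np

      def regularized_es_portfolio(R, beta=0.95, lam=0.1):
          # R: (N, p) matrix of historical asset returns.
          N, p = R.shape
          w = cp.Variable(p)
          t = cp.Variable()
          losses = -R @ w
          # sample expected shortfall at level beta (Rockafellar-Uryasev)
          es = t + cp.sum(cp.pos(losses - t)) / ((1 - beta) * N)
          prob = cp.Problem(cp.Minimize(es + lam * cp.sum_squares(w)),
                            [cp.sum(w) == 1])      # budget constraint
          prob.solve()
          return w.value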

  14. Microscopic Spin Model for the STOCK Market with Attractor Bubbling on Regular and Small-World Lattices

    NASA Astrophysics Data System (ADS)

    Krawiecki, A.

A multi-agent spin model for price changes in the stock market, based on an Ising-like cellular automaton with interactions between traders randomly varying in time, is investigated by means of Monte Carlo simulations. The structure of interactions has the topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable to the empirical ones. In contrast, in the case of networks with a certain degree of randomness, for a wide range of parameters the time series of logarithmic price returns exhibit intermittent bursting typical of volatility clustering. The tails of the return distributions also obey a power-law scaling with exponents comparable to those obtained from empirical data.
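
    A sketch of such a model, with illustrative parameter choices and a heat-bath update rather than the paper's exact automaton: square-lattice bonds are randomly rewired to obtain a small-world topology, couplings are redrawn each sweep to mimic interactions that vary randomly in time, and the log-return is taken proportional to the change in magnetization.

      import numpy as np

      def small_world_returns(L=32, p_rewire=0.1, steps=2000, beta=1.0, seed=0):
          rng = np.random.default_rng(seed)
          n = L * L
          # neighbor lists from the square lattice, with random rewiring
          nbrs = [[] for _ in range(n)]
          for i in range(n):
              x, y = divmod(i, L)
              for dx, dy in ((1, 0), (0, 1)):
                  j = ((x + dx) % L) * L + (y + dy) % L
                  if rng.random() < p_rewire:      # rewire this bond
                      j = int(rng.integers(n))     # may self-loop; harmless here
                  nbrs[i].append(j)
                  nbrs[j].append(i)
          s = rng.choice([-1, 1], size=n)
          rets = []
          for _ in range(steps):
              m_old = s.mean()
              J = rng.normal(size=n)               # time-varying couplings
              for i in rng.integers(n, size=n):    # random sequential sweep
                  h = J[i] * sum(s[j] for j in nbrs[i])
                  s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2 * beta * h)) else -1
              rets.append(s.mean() - m_old)        # log-return proxy
          return np.array(rets)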

  15. Tessellating the Sphere with Regular Polygons

    ERIC Educational Resources Information Center

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.

  16. Investigating Various Application Areas of Three-Dimensional Virtual Worlds for Higher Education

    ERIC Educational Resources Information Center

    Ghanbarzadeh, Reza; Ghapanchi, Amir Hossein

    2018-01-01

Three-dimensional virtual worlds (3DVWs) have been adopted extensively in the education sector worldwide, and there has been remarkable growth in the application of these environments for distance learning. A wide variety of universities and educational organizations across the world have utilized this technology for their regular learning and…

  17. Accretion onto some well-known regular black holes

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

In this work, we discuss accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes obtained using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, together with the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  18. Regular treatment with formoterol versus regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Lasserson, Toby J

    2014-01-01

    Background An increase in serious adverse events with both regular formoterol and regular salmeterol in chronic asthma has been demonstrated in previous Cochrane reviews. Objectives We set out to compare the risks of mortality and non-fatal serious adverse events in trials which have randomised patients with chronic asthma to regular formoterol versus regular salmeterol. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked manufacturers’ websites of clinical trial registers for unpublished trial data and also checked Food and Drug Administration (FDA) submissions in relation to formoterol and salmeterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel-design clinical trials on patients of any age and with any severity of asthma if they randomised patients to treatment with regular formoterol versus regular salmeterol (without randomised inhaled corticosteroids), and were of at least 12 weeks’ duration. Data collection and analysis Two authors independently selected trials for inclusion in the review and extracted outcome data. We sought unpublished data on mortality and serious adverse events from the sponsors and authors. Main results The review included four studies (involving 1116 adults and 156 children). All studies were open label and recruited patients who were already taking inhaled corticosteroids for their asthma, and all studies contributed data on serious adverse events. All studies compared formoterol 12 μg versus salmeterol 50 μg twice daily. The adult studies were all comparing Foradil Aerolizer with Serevent Diskus, and the children’s study compared Oxis Turbohaler to Serevent Accuhaler. There was only one death in an adult (which was unrelated to asthma) and none in children, and there were no significant differences in non-fatal serious adverse events comparing formoterol to salmeterol in adults (Peto odds ratio (OR) 0.77; 95

  19. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Purpose of regular fellowships. 61.3 Section 61.3..., TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships are... sciences and communication of information. (b) Special scientific projects for the compilation of existing...

  20. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    PubMed

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  1. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  2. Some Cosine Relations and the Regular Heptagon

    ERIC Educational Resources Information Center

    Osler, Thomas J.; Heng, Phongthong

    2007-01-01

    The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)

  3. Slice regular functions of several Clifford variables

    NASA Astrophysics Data System (ADS)

    Ghiloni, R.; Perotti, A.

    2012-11-01

We introduce a class of slice regular functions of several Clifford variables. Our approach to the definition of slice functions is based on the concept of stem functions of several variables and on the introduction on real Clifford algebras of a family of commuting complex structures. The class of slice regular functions includes, in particular, the family of (ordered) polynomials in several Clifford variables. We prove some basic properties of slice and slice regular functions and give examples to illustrate this function theory. In particular, we give integral representation formulas for slice regular functions and a Hartogs type extension result.

  4. Regular Decompositions for H(div) Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolev, Tzanio; Vassilevski, Panayot

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  5. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  6. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: plots of reconstructed conductivity changes along selected left and right vertical lines, comparing ground truth (GT) with total variation (TV) and total generalized variation (TGV) reconstructions; reconstructions from the GREIT algorithm are also shown.
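
    The anti-staircasing idea of TGV can be sketched in 1D with the cvxpy modeling library (a tooling assumption; the paper works in a FEM framework for EIT, not on 1D signals): the auxiliary variable v absorbs the gradient, so linear ramps are not penalized the way plain TV penalizes them.

      import cvxpy as cp
      import numpy as np

      def tgv_denoise_1d(f, alpha1=1.0, alpha0=2.0):
          # Second-order TGV denoising of a 1D signal f.
          n = f.size
          u = cp.Variable(n)        # denoised signal
          v = cp.Variable(n - 1)    # auxiliary gradient field
          cost = (cp.sum_squares(u - f)
                  + alpha1 * cp.norm1(cp.diff(u) - v)
                  + alpha0 * cp.norm1(cp.diff(v)))
          cp.Problem(cp.Minimize(cost)).solve()
          return u.value

    On a noisy linear ramp, a plain TV solution exhibits the staircase effect, while the TGV solution recovers the underlying slope.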

  7. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

During analysis of experimental data, one usually needs to restore a signal after it has been convolved with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
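
    A sketch of the computational core under a Gaussian-noise, Gaussian-prior reading of the method: the posterior mean coincides with a Tikhonov-regularized solution, with the smoothness matrix Omega and weight alpha playing the prior's role. Turchin's method additionally treats alpha statistically; that step is not shown here, and alpha is simply passed in.

      import numpy as np

      def statistical_regularization(K, y, Omega, alpha, sigma2=1.0):
          # Posterior mean for y = K x + noise with prior
          # p(x) ~ exp(-alpha/2 * x^T Omega x) and noise variance sigma2.
          A = K.T @ K / sigma2 + alpha * Omega
          return np.linalg.solve(A, K.T @ y / sigma2)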

  8. Model-based Clustering of High-Dimensional Data in Astrophysics

    NASA Astrophysics Data System (ADS)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to increased measurement capabilities. As a consequence, data are nowadays frequently high-dimensional and available in bulk or as streams. Model-based techniques for clustering are popular tools renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques perform poorly in high-dimensional spaces, mainly because of their dramatic over-parametrization. Recent developments in model-based classification overcome these drawbacks and make it possible to classify high-dimensional data efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
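    One of the simplest regularization-based fixes in this line of work is to add a ridge term to the estimated covariances. The sketch below illustrates the idea with scikit-learn's GaussianMixture and its reg_covar argument; it is a generic illustration of covariance regularization in the "small n / large p" regime, not one of the specific methods of the review (which are largely available as R packages).

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # "small n / large p": 60 samples in 200 dimensions, two shifted clusters
      X = np.vstack([rng.normal(0.0, 1.0, (30, 200)),
                     rng.normal(1.5, 1.0, (30, 200))])

      # reg_covar adds a constant to the covariance diagonals, keeping the
      # EM fit well-posed despite the heavy over-parametrization
      gmm = GaussianMixture(n_components=2, covariance_type="diag",
                            reg_covar=1e-2, random_state=0).fit(X)
      labels = gmm.predict(X)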

  9. Three-dimensional electron diffraction of plant light-harvesting complex

    PubMed Central

    Wang, Da Neng; Kühlbrandt, Werner

    1992-01-01

    Electron diffraction patterns of two-dimensional crystals of light-harvesting chlorophyll a/b-protein complex (LHC-II) from photosynthetic membranes of pea chloroplasts, tilted at different angles up to 60°, were collected to 3.2 Å resolution at -125°C. The reflection intensities were merged into a three-dimensional data set. The Friedel R-factor and the merging R-factor were 21.8 and 27.6%, respectively. Specimen flatness and crystal size were critical for recording electron diffraction patterns from crystals at high tilts. The principal sources of experimental error were attributed to limitations of the number of unit cells contributing to an electron diffraction pattern, and to the critical electron dose. The distribution of strong diffraction spots indicated that the three-dimensional structure of LHC-II is less regular than that of other known membrane proteins and is not dominated by a particular feature of secondary structure. PMID:19431817

  10. On the regularized fermionic projector of the vacuum

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  11. Network-based regularization for matched case-control analysis of high-dimensional DNA methylation data.

    PubMed

    Sun, Hokeun; Wang, Shuang

    2013-05-30

    Matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially in epigenetic studies with DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed a penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for the matched designs popular in epigenetic studies, which compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression that ignores matching is known to introduce serious estimation bias. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for the analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study on hepatocellular carcinoma (HCC), in which we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients using the Illumina Infinium HumanMethylation27 Beadchip. Several new CpG sites and genes known to be related to HCC were identified that had been missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length, which is instantaneously inverted at regular intervals, are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  13. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    12 CFR 725.3, Banks and Banking: NATIONAL CREDIT UNION ADMINISTRATION, REGULATIONS AFFECTING CREDIT UNIONS, CENTRAL LIQUIDITY FACILITY, § 725.3 Regular membership. (a) A natural person credit...

  14. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
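    A minimal sketch of the Tikhonov-plus-GCV recipe discussed above, assuming a discretized kernel matrix K, a measured trace S and a grid of candidate α values (all names illustrative, not the authors' code):

      import numpy as np

      def tikhonov_gcv(K, S, alphas):
          # Tikhonov solution P(alpha) = (K'K + alpha L'L)^(-1) K'S with a
          # second-derivative operator L, scoring each alpha by generalized
          # cross validation; returns the winning alpha and distribution.
          n, m = K.shape
          L = np.diff(np.eye(m), n=2, axis=0)
          best = (np.inf, None, None)
          for a in alphas:
              A = np.linalg.solve(K.T @ K + a * L.T @ L, K.T)  # regularized inverse
              P = A @ S                                        # distance distribution
              H = K @ A                                        # influence matrix
              rss = np.linalg.norm(K @ P - S) ** 2
              gcv = rss / (n - np.trace(H)) ** 2               # GCV score
              if gcv < best[0]:
                  best = (gcv, a, P)
          return best[1], best[2]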

  15. On regularizing the MCTDH equations of motion

    NASA Astrophysics Data System (ADS)

    Meyer, Hans-Dieter; Wang, Haobin

    2018-03-01

    The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs except the first one are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space, and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that correct results are obtained only with the new regularization scheme. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals, because it ignores them and does not propagate them at all.
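    For contrast with the coefficient-tensor scheme advocated above, here is a sketch of the conventional regularization, in which the near-singular density matrix is shifted before inversion via rho -> rho + eps * exp(-rho/eps), applied to its eigenvalues. The function name and the default eps are illustrative.

      import numpy as np

      def regularized_inverse(rho, eps=1e-8):
          # Invert a Hermitian density matrix after lifting its (near-)zero
          # eigenvalues, the standard fix for unoccupied SPFs.
          w, V = np.linalg.eigh(rho)
          w_reg = w + eps * np.exp(-w / eps)   # leaves large occupations untouched
          return (V / w_reg) @ V.conj().T      # V diag(1/w_reg) V^H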

  16. A quadratically regularized functional canonical correlation analysis for identifying the global structure of pleiotropy with NGS data

    PubMed Central

    Zhu, Yun; Fan, Ruzong; Xiong, Momiao

    2017-01-01

    Investigating the pleiotropic effects of genetic variants can increase statistical power, provide important information for achieving a deep understanding of the complex genetic structures of disease, and offer powerful tools for designing effective treatments with fewer side effects. However, the current multiple-phenotype association analysis paradigm lacks breadth (the number of phenotypes and genetic variants jointly analyzed at the same time) and depth (the hierarchical structure of phenotypes and genotypes). A key issue for high-dimensional pleiotropic analysis is to effectively extract informative internal representations and features from high-dimensional genotype and phenotype data. To explore correlation information of genetic variants, effectively reduce data dimensions, and overcome critical barriers to the development of novel statistical methods and computational algorithms for genetic pleiotropic analysis, we propose a new statistical method, quadratically regularized functional CCA (QRFCCA), for association analysis; it combines three approaches: (1) quadratically regularized matrix factorization, (2) functional data analysis and (3) canonical correlation analysis (CCA). Large-scale simulations show that QRFCCA has much higher power than the ten competing statistics while retaining the appropriate type 1 error rates. To further evaluate performance, QRFCCA and ten other statistics are applied to the whole-genome sequencing dataset from the TwinsUK study. Using QRFCCA, we identify a total of 79 genes with rare variants and 67 genes with common variants significantly associated with the 46 traits. The results show that QRFCCA substantially outperforms the ten other statistics. PMID:29040274

  17. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduced artifacts compared to reconstructions without phase cycling, and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and of partial Fourier + divergence-free regularized flow imaging + PI + CS, were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction without the need for phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  18. Statistical investigation of avalanches of three-dimensional small-world networks and their boundary and bulk cross-sections

    NASA Astrophysics Data System (ADS)

    Najafi, M. N.; Dashti-Naserabadi, H.

    2018-03-01

    In many situations we are interested in the propagation of energy in some portions of a three-dimensional system with dilute long-range links. In this paper, a sandpile model is defined on a three-dimensional small-world network with real dissipative boundaries, and the energy propagation is studied in three dimensions as well as in two-dimensional cross-sections. Two types of cross-sections are defined in the system, one in the bulk and another at the system boundary. The motivation is to clarify how the statistics of the avalanches in the bulk cross-section tend to the statistics of the dissipative avalanches, defined at the boundaries, as the concentration of long-range links (α) increases. This trend is numerically shown to be a power law in a manner described in the paper. Two regimes of α are considered in this work. For sufficiently small α the dominant behavior of the system is just like that of the regular BTW model, whereas for intermediate values the behavior is nontrivial, with some exponents that are reported in the paper. It is shown that the spatial extent up to which the statistics is similar to the regular BTW model scales with α just like the dissipative BTW model with dissipation factor (mass in the corresponding ghost model) m² ~ α, for the three-dimensional system as well as its two-dimensional cross-sections.

  19. Self-assembled one dimensional functionalized metal-organic nanotubes (MONTs) for proton conduction.

    PubMed

    Panda, Tamas; Kundu, Tanay; Banerjee, Rahul

    2012-06-04

    Two self-assembled isostructural functionalized metal-organic nanotubes have been synthesized using 5-triazole isophthalic acid (5-TIA) with In(III) and Cd(II). In- and Cd-5TIA possess one-dimensional (1D) nanotubular architectures and show proton conductivity along regular 1D channels, measured as 5.35 × 10⁻⁵ and 3.61 × 10⁻³ S cm⁻¹, respectively.

  20. High-speed manufacturing of highly regular femtosecond laser-induced periodic surface structures: physical origin of regularity.

    PubMed

    Gnilitskyi, Iaroslav; Derrien, Thibault J-Y; Levy, Yoann; Bulgakova, Nadezhda M; Mocek, Tomáš; Orazi, Leonardo

    2017-08-16

    Highly regular laser-induced periodic surface structures (HR-LIPSS) have been fabricated on surfaces of Mo, steel alloy and Ti at a record processing speed over large areas and with record regularity of the obtained sub-wavelength structures. The physical mechanisms governing LIPSS regularity are identified and linked with the decay length (i.e. the mean free path) of the excited surface electromagnetic waves (SEWs). The dispersion of the LIPSS orientation angle correlates well with the SEW decay length: the shorter this length, the more regular the LIPSS. A material-dependent criterion for obtaining HR-LIPSS is proposed for a large variety of metallic materials. It has been found that decreasing the spot size close to the SEW decay length is key to covering several cm² of material surface with HR-LIPSS in a few seconds. Theoretical predictions suggest that reducing the laser wavelength can make HR-LIPSS production possible on essentially any metal. This unprecedented level of control over laser-induced periodic structure formation makes this laser-writing technology flexible, robust and, hence, highly competitive for advanced industrial applications based on surface nanostructuring.

  1. Training Regular Education Personnel To Be Special Education Consultants to Other Regular Education Personnel in Rural Settings.

    ERIC Educational Resources Information Center

    McIntosh, Dean K.; Raymond, Gail I.

    The Program for Exceptional Children of the University of South Carolina developed a project to address the need for an improved service delivery model for handicapped students in rural South Carolina. The project trained regular elementary teachers at the master's degree level to function as consultants to other regular classroom teachers with…

  2. 20 CFR 226.14 - Employee regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    20 CFR 226.14, Employees' Benefits: COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES, Computing an Employee Annuity, § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  3. Critical behavior of the XY-rotor model on regular and small-world networks

    NASA Astrophysics Data System (ADS)

    De Nigris, Sarah; Leoncini, Xavier

    2013-07-01

    We study the XY-rotor model on small networks whose number of links scales with the system size as N_links ~ N^γ, where 1 ≤ γ ≤ 2. We first focus on regular one-dimensional rings in the microcanonical ensemble. For γ < 1.5 the model behaves like a short-range one and no phase transition occurs. For γ > 1.5, the equilibrium properties of the system are found to be identical to the mean field, which displays a second-order phase transition at a critical energy density ε = E/N, ε_c = 0.75. Moreover, for γ_c ≃ 1.5 we find that a nontrivial state emerges, characterized by an infinite susceptibility. We then consider small-world networks, using the Watts-Strogatz mechanism on the regular networks parametrized by γ. We first analyze the topology and find that the small-world regime appears for rewiring probabilities which scale as p_SW ∝ 1/N^γ. Then, considering the XY-rotor model on these networks, we find that a second-order phase transition occurs at a critical energy ε_c which depends logarithmically on the topological parameters p and γ. We also define a critical probability p_MF, corresponding to the probability beyond which the mean field is quantitatively recovered, and we analyze its dependence on γ.

  4. 39 CFR 6.1 - Regular meetings, annual meeting.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    39 CFR 6.1, Postal Service: UNITED STATES POSTAL SERVICE, THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE, MEETINGS (ARTICLE VI), § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  5. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    PubMed

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable.

  6. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method

    PubMed Central

    Korvink, Jan G.

    2016-01-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable. PMID:27279766
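    A minimal one-dimensional sketch of the two regularization ingredients named in these records, assuming a design density on a uniform grid: a Helmholtz-type PDE filter followed by a smoothed-Heaviside threshold projection. The parameter names (filter radius r, sharpness beta, threshold eta) follow common topology-optimization usage; this is an illustration, not the authors' implementation.

      import numpy as np

      def helmholtz_filter_1d(x, r, h=1.0):
          # Solve (-r^2 d^2/dz^2 + 1) x_f = x on a uniform grid with
          # zero-flux (Neumann) ends; a 1-D stand-in for the PDE filter.
          n = x.size
          A = np.eye(n) * (1 + 2 * r**2 / h**2)
          off = -r**2 / h**2
          A += np.diag(np.full(n - 1, off), 1) + np.diag(np.full(n - 1, off), -1)
          A[0, 1] *= 2
          A[-1, -2] *= 2
          return np.linalg.solve(A, x)

      def threshold_projection(x_f, beta=8.0, eta=0.5):
          # Smoothed Heaviside pushing filtered densities toward 0/1.
          num = np.tanh(beta * eta) + np.tanh(beta * (x_f - eta))
          den = np.tanh(beta * eta) + np.tanh(beta * (1 - eta))
          return num / den

      # usage: filter then project a raw density field
      rho = np.random.rand(100)
      rho_phys = threshold_projection(helmholtz_filter_1d(rho, r=4.0))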

  7. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  8. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  9. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
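    A heavily simplified toy sketch of the maxi-min design problem described in these records: the fluence pattern is parameterized by Gaussian basis functions over view angle, and the worst-case detectability is maximized under a fixed fluence budget. The detectability function here is a hypothetical placeholder, and plain random search stands in for the CMA-ES optimizer of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      views = np.linspace(0, 2 * np.pi, 60)      # gantry view angles
      centers = np.linspace(0, 2 * np.pi, 6)     # Gaussian basis centers

      def fluence(coeffs, width=0.6):
          # FFM pattern as a sum of 1-D Gaussian basis functions
          return sum(c * np.exp(-(views - mu) ** 2 / (2 * width**2))
                     for c, mu in zip(coeffs, centers))

      def detectability(flu):
          # hypothetical stand-in for d' at three sample locations
          return np.array([flu[:20].mean(), flu[20:40].mean(), flu[40:].mean()])

      best_val, best_c = -np.inf, None
      for _ in range(5000):                      # random search in place of CMA-ES
          c = rng.uniform(0, 1, centers.size)
          flu = fluence(c)
          flu *= views.size / flu.sum()          # enforce a total-fluence budget
          val = detectability(flu).min()         # maxi-min objective
          if val > best_val:
              best_val, best_c = val, c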

  10. Delayed Acquisition of Non-Adjacent Vocalic Distributional Regularities

    ERIC Educational Resources Information Center

    Gonzalez-Gomez, Nayeli; Nazzi, Thierry

    2016-01-01

    The ability to compute non-adjacent regularities is key in the acquisition of a new language. In the domain of phonology/phonotactics, sensitivity to non-adjacent regularities between consonants has been found to appear between 7 and 10 months. The present study focuses on the emergence of a posterior-anterior (PA) bias, a regularity involving two…

  11. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine-learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than that of other state-of-the-art dimension-reduction-based methods.

  12. Equation of state of the one- and three-dimensional Bose-Bose gases

    NASA Astrophysics Data System (ADS)

    Chiquillo, Emerson

    2018-06-01

    We calculate the equation of state of Bose-Bose gases in one and three dimensions in the framework of an effective quantum field theory. The beyond-mean-field approximation at zero temperature and the one-loop finite-temperature results are obtained by performing functional integration on a local effective action. The ultraviolet-divergent zero-point quantum fluctuations are removed by means of dimensional regularization. We derive the nonlinear Schrödinger equation describing one- and three-dimensional Bose-Bose mixtures and solve it analytically in the one-dimensional scenario. This equation supports self-trapped brightlike solitonic droplets and self-trapped darklike solitons. At low temperature, we also find that the pressure and the number of particles of symmetric quantum droplets have a nontrivial dependence on the chemical potential and on the difference between the intra- and interspecies coupling constants.

  13. Optimal behaviour can violate the principle of regularity.

    PubMed

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory, based on axioms including transitivity, regularity and the independence of irrelevant alternatives, is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  14. 29 CFR 784.16 - “Regular rate.”

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    29 CFR 784.16, Labor Regulations: FISHING AND OPERATIONS ON AQUATIC PRODUCTS, General, Some Basic Definitions, § 784.16 "Regular rate." As... in the Act not less than one and one-half times their regular rates of pay. Section 7(e) of the Act...

  15. 29 CFR 784.16 - “Regular rate.”

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    29 CFR 784.16, Labor Regulations: FISHING AND OPERATIONS ON AQUATIC PRODUCTS, General, Some Basic Definitions, § 784.16 "Regular rate." As... in the Act not less than one and one-half times their regular rates of pay. Section 7(e) of the Act...

  16. 29 CFR 784.16 - “Regular rate.”

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    29 CFR 784.16, Labor Regulations: FISHING AND OPERATIONS ON AQUATIC PRODUCTS, General, Some Basic Definitions, § 784.16 "Regular rate." As... in the Act not less than one and one-half times their regular rates of pay. Section 7(e) of the Act...

  17. Diffusion of a Concentrated Lattice Gas in a Regular Comb Structure

    NASA Astrophysics Data System (ADS)

    Garcia, Paul; Wentworth, Christopher

    2008-10-01

    Understanding diffusion in constrained geometries is of interest in contexts as varied as mass transport in disordered solids, such as a percolation cluster, and intercellular transport of water molecules in biological tissue. In this investigation we explore diffusion in a very simple constrained geometry: a comb-like structure consisting of a one-dimensional backbone of lattice sites with regularly spaced teeth of fixed length. The model assumes a fixed concentration of diffusing particles that can hop to nearest-neighbor sites only and do not interact with each other except that double occupancy is forbidden. The system is simulated using a Monte Carlo procedure. The mean-square displacement of a tagged particle is calculated from the simulation as a function of time. The simulation shows normal diffusive behavior after a period of anomalous diffusion that lengthens as the tooth size increases.
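    A minimal Monte Carlo sketch of diffusion on such a comb, assuming independent tagged walkers (i.e., dropping the double-occupancy exclusion of the full model) and measuring the mean-square displacement along the backbone:

      import numpy as np

      def comb_walk_msd(n_steps=20000, tooth_len=10, n_walkers=500, seed=0):
          # Walkers live on a backbone (coordinate x) with a tooth of length
          # tooth_len at every site (coordinate y; y = 0 means on the backbone).
          # Moves that would leave the comb are simply rejected.
          rng = np.random.default_rng(seed)
          x = np.zeros(n_walkers, dtype=int)
          y = np.zeros(n_walkers, dtype=int)
          msd = np.empty(n_steps)
          for t in range(n_steps):
              move = rng.integers(0, 4, n_walkers)     # 0/1: +-x, 2/3: +-y
              on_backbone = (y == 0)
              x += np.where(on_backbone & (move == 0), 1, 0)
              x -= np.where(on_backbone & (move == 1), 1, 0)
              y += np.where((move == 2) & (y < tooth_len), 1, 0)
              y -= np.where((move == 3) & (y > 0), 1, 0)
              msd[t] = np.mean(x.astype(float) ** 2)
          return msd

      # longer teeth trap walkers longer, extending the anomalous regime
      msd_short, msd_long = comb_walk_msd(tooth_len=5), comb_walk_msd(tooth_len=40)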

  18. Optimal behaviour can violate the principle of regularity

    PubMed Central

    Trimmer, Pete C.

    2013-01-01

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory, based on axioms including transitivity, regularity and the independence of irrelevant alternatives, is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision. PMID:23740781

  19. For numerical differentiation, dimensionality can be a blessing!

    NASA Astrophysics Data System (ADS)

    Anderssen, Robert S.; Hegland, Markus

    Finite difference methods, such as the mid-point rule, have been applied successfully to the numerical solution of ordinary and partial differential equations. If such formulas are applied to observational data in order to determine derivatives, however, the results can be disastrous. The reason for this is that measurement errors, and even rounding errors in computer approximations, are strongly amplified in the differentiation process, especially if small step-sizes are chosen and higher derivatives are required. A number of authors have examined the use of various forms of averaging which allow the stable computation of low-order derivatives from observational data. The size of the averaging set acts like a regularization parameter and has to be chosen as a function of the grid size h. In this paper, it is first shown how first- (and higher-) order single-variate numerical differentiation of higher-dimensional observational data can be stabilized with a smaller loss of accuracy than occurs for the corresponding differentiation of one-dimensional data. The result is then extended to the multivariate differentiation of higher-dimensional data. The nature of the trade-off between convergence and stability is explicitly characterized, and the complexity of various implementations is examined.
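    The averaging idea is easy to state in code: smooth the samples over a window before differencing, with the window size acting as the regularization parameter. A minimal sketch (parameter values illustrative):

      import numpy as np

      def smoothed_derivative(y, h, window):
          # Moving-average smoothing followed by central differences; the
          # window length trades bias for noise suppression.
          kernel = np.ones(window) / window
          y_bar = np.convolve(y, kernel, mode="same")   # edges are approximate
          return np.gradient(y_bar, h)

      # toy data: noisy samples of sin(t); the exact derivative is cos(t)
      h = 0.01
      t = np.arange(0, 2 * np.pi, h)
      y = np.sin(t) + 1e-3 * np.random.randn(t.size)
      d_raw = np.gradient(y, h)               # noise amplified by ~1/h
      d_reg = smoothed_derivative(y, h, 25)   # stabilized estimate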

  20. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity-based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for cross-sectional correlation even after common factors are taken out, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
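    A simplified numerical sketch of the factor-plus-thresholding idea: extract k principal-component factors, then soft-threshold the residual ("idiosyncratic") covariance. A single universal threshold is used here for brevity, whereas the paper's estimator uses entry-adaptive thresholds.

      import numpy as np

      def factor_threshold_cov(X, k, thresh):
          # X: n x p data matrix; returns a low-rank factor part plus a
          # soft-thresholded sparse residual covariance.
          X = X - X.mean(axis=0)
          n, p = X.shape
          S = X.T @ X / n
          w, V = np.linalg.eigh(S)
          idx = np.argsort(w)[::-1][:k]
          low_rank = (V[:, idx] * w[idx]) @ V[:, idx].T    # factor component
          R = S - low_rank                                 # idiosyncratic part
          Rt = np.sign(R) * np.maximum(np.abs(R) - thresh, 0.0)
          np.fill_diagonal(Rt, np.diag(R))                 # never threshold variances
          return low_rank + Rt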

  2. Consistent Partial Least Squares Path Modeling via Regularization

    PubMed Central

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc against its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491

  3. Consistent Partial Least Squares Path Modeling via Regularization.

    PubMed

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc against its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
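    The ridge idea at the heart of this fix is easy to isolate: when path coefficients are obtained from correlations among latent variables, a ridge term on the correlation matrix stabilizes the solve. A minimal sketch on hypothetical latent correlations (not the full PLSc pipeline):

      import numpy as np

      def ridge_path_coefficients(R_xx, r_xy, ridge=0.1):
          # R_xx: correlations among predictor latents; r_xy: their
          # correlations with the outcome latent; ridge shrinks the solve.
          p = R_xx.shape[0]
          return np.linalg.solve(R_xx + ridge * np.eye(p), r_xy)

      # near-collinear latents: the unregularized solve is unstable
      R_xx = np.array([[1.00, 0.98], [0.98, 1.00]])
      r_xy = np.array([0.60, 0.58])
      beta_ridge = ridge_path_coefficients(R_xx, r_xy, ridge=0.05)
      beta_plain = np.linalg.solve(R_xx, r_xy)   # wildly inflated coefficients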

  4. Regularity results for the minimum time function with Hörmander vector fields

    NASA Astrophysics Data System (ADS)

    Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa

    2018-03-01

    In a bounded domain of R^n with boundary given by a smooth (n-1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X_1, …, X_N} subject to Hörmander's bracket-generating condition. We investigate the regularity of the viscosity solution T of such a problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses point-wise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X_1, …, X_N} is a symplectic manifold. We apply our results to several examples.

  5. Regular Patterns in Cerebellar Purkinje Cell Simple Spike Trains

    PubMed Central

    Shin, Soon-Lim; Hoebeek, Freek E.; Schonewille, Martijn; De Zeeuw, Chris I.; Aertsen, Ad; De Schutter, Erik

    2007-01-01

    Background: Cerebellar Purkinje cells (PC) in vivo are commonly reported to generate irregular spike trains, documented by high coefficients of variation of interspike-intervals (ISI). In strong contrast, they fire very regularly in the in vitro slice preparation. We studied the nature of this difference in firing properties by focusing on short-term variability and its dependence on behavioral state. Methodology/Principal Findings: Using an analysis based on CV2 values, we could isolate precise regular spiking patterns, lasting up to hundreds of milliseconds, in PC simple spike trains recorded in both anesthetized and awake rodents. Regular spike patterns, defined by low variability of successive ISIs, comprised over half of the spikes, showed a wide range of mean ISIs, and were affected by behavioral state and tactile stimulation. Interestingly, regular patterns often coincided in nearby Purkinje cells without precise synchronization of individual spikes. Regular patterns exclusively appeared during the up state of the PC membrane potential, while single ISIs occurred both during up and down states. Possible functional consequences of regular spike patterns were investigated by modeling the synaptic conductance in neurons of the deep cerebellar nuclei (DCN). Simulations showed that these regular patterns caused epochs of relatively constant synaptic conductance in DCN neurons. Conclusions/Significance: Our findings indicate that the apparent irregularity in cerebellar PC simple spike trains in vivo is most likely caused by mixing of different regular spike patterns, separated by single long intervals, over time. We propose that PCs may signal information, at least in part, in regular spike patterns to downstream DCN neurons. PMID:17534435
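    The CV2 statistic underlying this pattern detection is simple to compute: CV2_i = 2|ISI_{i+1} - ISI_i| / (ISI_{i+1} + ISI_i), with runs of low CV2 marking regular patterns. A minimal sketch (the threshold value is illustrative, not the one used in the paper):

      import numpy as np

      def regular_patterns(spike_times, cv2_max=0.2):
          # Returns the CV2 sequence and (start, stop) index pairs of maximal
          # runs of consecutive low-CV2 intervals, i.e. candidate regular patterns.
          isi = np.diff(spike_times)
          cv2 = 2 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1])
          low = cv2 < cv2_max
          runs, start = [], None
          for i, flag in enumerate(low):
              if flag and start is None:
                  start = i
              elif not flag and start is not None:
                  runs.append((start, i))
                  start = None
          if start is not None:
              runs.append((start, low.size))
          return cv2, runs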

  6. Alternative dimensional reduction via the density matrix

    NASA Astrophysics Data System (ADS)

    de Carvalho, C. A.; Cornwall, J. M.; da Silva, A. J.

    2001-07-01

    We give graphical rules, based on earlier work for the functional Schrödinger equation, for constructing the density matrix for scalar and gauge fields in equilibrium at finite temperature T. More useful is a dimensionally reduced effective action (DREA) constructed from the density matrix by further functional integration over the arguments of the density matrix coupled to a source. The DREA is an effective action in one less dimension which may be computed order by order in perturbation theory or by dressed-loop expansions; it encodes all thermal matrix elements. We term the DREA procedure alternative dimensional reduction, to distinguish it from the conventional dimensionally reduced field theory (DRFT) which applies at infinite T. The DREA is useful because it gives a dimensionally reduced theory usable at any T including infinity, where it yields the DRFT, and because it does not and cannot have certain spurious infinities which sometimes occur in the density matrix itself or the conventional DRFT; these come from ln T factors at infinite temperature. The DREA can be constructed to all orders (in principle) and the only regularizations needed are those which control the ultraviolet behavior of the zero-T theory. An example of spurious divergences in the DRFT occurs in d = 2+1 φ⁴ theory dimensionally reduced to d = 2. We study this theory and show that the rules for the DREA replace these "wrong" divergences in physical parameters by calculable powers of ln T; we also compute the phase transition temperature of this φ⁴ theory in one-loop order. Our density-matrix construction is equivalent to a construction of the Landau-Ginzburg "coarse-grained free energy" from a microscopic Hamiltonian.

  7. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  8. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    47 CFR 76.614, Telecommunication: § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands 108-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by...

  9. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    47 CFR 76.614, Telecommunication: § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands 108-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by...

  10. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    47 CFR 76.614, Telecommunication: § 76.614 Cable television system regular monitoring. Cable television operators transmitting carriers in the frequency bands 108-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by...

  11. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic regularization parameter selection scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.

  12. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain strongly sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strongly sparsity-promoting solutions, we employ the ℓ1/2 norm as a regularizer. The very recent study on ℓ1/2 norm regularization theory in compressive sensing shows that its solutions can be sparser than those obtained with the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems, so that closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary avoids trivial solutions while capturing its intrinsic properties. Experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strongly sparsity-promoting solutions efficiently, but also learn a more accurate dictionary, in terms of dictionary recovery and image processing, than state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.
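    The one-dimensional subproblems mentioned above have a known closed-form solution, the half-thresholding operator from Xu et al.'s ℓ1/2 theory. A sketch for the scalar problem min_x (x - y)^2 + lam * |x|^(1/2), with a brute-force check (the constants follow the published operator; treat this as an assumption, not the paper's code):

      import numpy as np

      def half_threshold(y, lam):
          # Closed-form minimizer of (x - y)^2 + lam * |x|**0.5 per component.
          y = np.asarray(y, dtype=float)
          t = (54 ** (1 / 3) / 4) * lam ** (2 / 3)            # threshold
          with np.errstate(divide="ignore", invalid="ignore"):
              phi = np.arccos(np.clip((lam / 8) * (np.abs(y) / 3) ** -1.5, -1, 1))
          x = (2 / 3) * y * (1 + np.cos(2 * np.pi / 3 - 2 * phi / 3))
          return np.where(np.abs(y) > t, x, 0.0)

      # sanity check against brute-force minimization at one point
      grid = np.linspace(-5, 5, 200001)
      y0, lam0 = 2.0, 1.0
      x_bf = grid[np.argmin((grid - y0) ** 2 + lam0 * np.abs(grid) ** 0.5)]
      assert abs(half_threshold(y0, lam0) - x_bf) < 1e-3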

  13. Mathematical Model and Simulation of Particle Flow around Choanoflagellates Using the Method of Regularized Stokeslets

    NASA Astrophysics Data System (ADS)

    Nararidh, Niti

    2013-11-01

    Choanoflagellates are unicellular organisms whose intriguing morphology includes a set of collars/microvilli emanating from the cell body, surrounding the beating flagellum. We investigated the role of the microvilli in the feeding and swimming behavior of the organism using a three-dimensional model based on the method of regularized Stokeslets. This model allows us to examine the velocity generated around the feeding organism tethered in place, as well as to predict the paths of surrounding free flowing particles. In particular, we can depict the effective capture of nutritional particles and bacteria in the fluid, showing the hydrodynamic cooperation between the cell, flagellum, and microvilli of the organism. Funding Source: Murchison Undergraduate Research Fellowship.
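    For readers unfamiliar with the method, a regularized Stokeslet replaces the singular point-force solution of Stokes flow with a smoothed ("blob") version. Below is a sketch of the standard 3-D formula (Cortez-style blob); geometry, forces, and parameter values are illustrative, not the model of the study above.

      import numpy as np

      def regularized_stokeslet_velocity(x_eval, x_forces, forces, eps, mu=1.0):
          # u_i(x) = sum_k [ f_i (r^2 + 2 eps^2) + (f . d) d_i ]
          #                / (8 pi mu (r^2 + eps^2)^(3/2)),  d = x - x_k
          u = np.zeros_like(x_eval, dtype=float)
          for xk, fk in zip(x_forces, forces):
              d = x_eval - xk                     # (n, 3) separation vectors
              r2 = np.sum(d * d, axis=1)
              denom = (8 * np.pi * mu * (r2 + eps**2) ** 1.5)[:, None]
              u += (fk * (r2 + 2 * eps**2)[:, None] + (d @ fk)[:, None] * d) / denom
          return u

      # toy usage: flow at two points due to one force along z at the origin
      pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
      u = regularized_stokeslet_velocity(pts, np.array([[0.0, 0.0, 0.0]]),
                                         np.array([[0.0, 0.0, 1.0]]), eps=0.1)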

  14. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    20 CFR 226.35, Employees' Benefits: COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES, Computing a Spouse or Divorced Spouse Annuity, § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  15. Regular sun exposure benefits health.

    PubMed

    van der Rhee, H J; de Vries, E; Coebergh, J W

    2016-12-01

    Since it was discovered that UV radiation is the main environmental cause of skin cancer, primary prevention programs have been started. These programs advise avoiding exposure to sunlight. However, the question arises whether sun-shunning behaviour might affect general health. During the last decades, new favourable associations between sunlight and disease have been discovered. There is growing observational and experimental evidence that regular exposure to sunlight contributes to the prevention of colon, breast, and prostate cancer, non-Hodgkin lymphoma, multiple sclerosis, hypertension and diabetes. Initially, these beneficial effects were ascribed to vitamin D. Recently it became evident that immunomodulation, the formation of nitric oxide, melatonin, serotonin, and the effect of (sun)light on circadian clocks are involved as well. In Europe (above 50 degrees north latitude), the risk of skin cancer (particularly melanoma) is mainly caused by an intermittent pattern of exposure, while regular exposure confers a relatively low risk. The available data on the negative and positive effects of sun exposure are discussed. Considering these data, we hypothesize that regular sun exposure benefits health. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  17. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    PubMed

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1,000). Among 186 peptides with > 8 residues for each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.

  18. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    PubMed Central

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-01-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
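
    The Lanczos-based reduction described above can be sketched as follows: a Krylov basis is grown from the initial state, and the Hamiltonian restricted to that basis is tridiagonal; if the dynamics is confined to an invariant subspace, the recursion terminates early with a vanishing off-diagonal coefficient. The Hamiltonian H (e.g. a graph adjacency matrix) and the target dimension m are assumptions of this generic sketch.

    ```python
    import numpy as np

    def lanczos_reduction(H, v0, m):
        """Return an orthonormal Krylov basis V and the reduced tridiagonal
        Hamiltonian T = V^T H V built from the initial state v0 (sketch)."""
        n = H.shape[0]
        V = np.zeros((n, m))
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        V[:, 0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = H @ V[:, j]
            alpha[j] = V[:, j] @ w
            w = w - alpha[j] * V[:, j]
            if j > 0:
                w = w - beta[j - 1] * V[:, j - 1]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                if beta[j] < 1e-12:   # dynamics closed: invariant subspace found
                    k = j + 1
                    T = np.diag(alpha[:k]) + np.diag(beta[:j], 1) + np.diag(beta[:j], -1)
                    return V[:, :k], T
                V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return V, T
    ```

    For a star graph with the walker starting at the hub, for example, the recursion closes after two steps, reflecting the small invariant subspace that governs the dynamics.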

  19. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing simulation errors is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which would indicate that muscles are pushing and not pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.

  20. RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING

    PubMed Central

    Liu, Meizhu; Vemuri, Baba C.

    2011-01-01

    Boosting is a versatile machine learning technique that has numerous applications, including but not limited to image processing, computer vision and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities (available in closed form) – used to represent the distribution over the training data and the classification error, respectively – to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost than other regularized boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LPBoost and CAVIAR, on a variety of datasets, including the publicly available OASIS database, a home-grown epilepsy database and the well-known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
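
    The closed-form distance referred to above is the great-circle (Fisher-Rao) distance between square-root densities, which live on the unit sphere. A minimal sketch of just this distance for discrete densities follows; the paper's full regularized LPBoost update is not reproduced here.

    ```python
    import numpy as np

    def sqrt_density_distance(p, q):
        """Riemannian (arc-length) distance between two discrete probability
        densities via their square-root representations on the unit sphere."""
        p = np.asarray(p, float) / np.sum(p)
        q = np.asarray(q, float) / np.sum(q)
        inner = np.clip(np.sqrt(p) @ np.sqrt(q), -1.0, 1.0)
        return np.arccos(inner)
    ```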

  1. Application of real rock pore-throat statistics to a regular pore network model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakibul, M.; Sarker, H.; McIntyre, D.

    2011-01-01

    This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, ground water flow, etc. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but for the non-wetting phase the simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the necessity of regular core analysis, which can be time-consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results.

  2. Application of real rock pore-throat statistics to a regular pore network model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarker, M.R.; McIntyre, D.; Ferer, M.

    2011-01-01

    This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, ground water flow, etc. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but for the non-wetting phase the simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the necessity of regular core analysis, which can be time-consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results.

  3. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

    Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of the Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity, etc., and at the same time, it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean-free-path epsilon approaches 0, and the convergence rate is estimated.

  4. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for cross-sectional correlation even after the common factors are taken out, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
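
    A rough sketch of this estimator family: extract K principal components as the common-factor part of the sample covariance, then adaptively threshold the residual (idiosyncratic) covariance with entry-dependent scales. The constant c and the rate sqrt(log p / T) below are illustrative assumptions, not the paper's exact tuning.

    ```python
    import numpy as np

    def factor_threshold_covariance(X, K, c=0.5):
        """Covariance estimate from T x p data X: low-rank factor part from the
        top K eigenvectors plus an adaptively thresholded residual (sketch)."""
        T, p = X.shape
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc / T                          # sample covariance
        vals, vecs = np.linalg.eigh(S)
        top = np.argsort(vals)[::-1][:K]
        low_rank = (vecs[:, top] * vals[top]) @ vecs[:, top].T
        R = S - low_rank                           # residual covariance
        omega = c * np.sqrt(np.log(p) / T)         # rate-based threshold level
        scale = np.sqrt(np.outer(np.diag(R), np.diag(R)))
        R_thr = np.where(np.abs(R) >= omega * scale, R, 0.0)
        np.fill_diagonal(R_thr, np.diag(R))        # never threshold the variances
        return low_rank + R_thr
    ```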

  5. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    ERIC Educational Resources Information Center

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  6. Regularized Stokeslet representations for the flow around a human sperm

    NASA Astrophysics Data System (ADS)

    Ishimoto, Kenta; Gadelha, Hermes; Gaffney, Eamonn; Smith, David; Kirkman-Brown, Jackson

    2017-11-01

    The sperm flagellum does not simply push the sperm. We have established a new theoretical scheme for the dimensional reduction of swimming sperm dynamics, via high-frame-rate digital microscopy of a swimming human sperm cell. This has allowed the reconstruction of the flagellar waveform as a limit cycle in a phase space of PCA modes. With this waveform, boundary element numerical simulation has successfully captured fine-scale sperm swimming trajectories. Further analyses of the flow field around the cell have also demonstrated a pusher-type time-averaged flow, though the instantaneous flow field can temporarily vary in a more complicated manner, even pulling the sperm. Applying PCA to the flow field, we have further found that a small number of PCA modes explain the temporal patterns of the flow, whose core features are well approximated by a few regularized Stokeslets. Such representations provide a methodology for coarse-graining the time-dependent flow around a human sperm and other flagellated microorganisms for use in developing population-level models that retain individual cell dynamics.
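
    The coarse-graining step described above amounts to a principal component analysis of flow-field snapshots. A minimal sketch, assuming each snapshot is a flattened velocity field stacked as a row of the data matrix:

    ```python
    import numpy as np

    def flow_pca(snapshots, k=3):
        """Reduce a (T, N) array of flow-field snapshots to its leading k PCA
        modes (spatial patterns) and their temporal coefficients (sketch)."""
        X = snapshots - snapshots.mean(axis=0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        modes = Vt[:k]                 # spatial modes
        coeffs = U[:, :k] * s[:k]      # temporal amplitudes of each mode
        return modes, coeffs
    ```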

  7. Reliability and dimensionality of judgments of visually textured materials.

    PubMed

    Cho, R Y; Yang, V; Hallett, P E

    2000-05-01

    We extended perceptual studies of the Brodatz set of textured materials. In the experiments, texture perception for different texture sets, viewing distances, or lighting intensities was examined. Subjects compared one pair of textures at a time. The main task was to rapidly rate all of the texture pairs on a number scale for their overall dissimilarities first and then for their dissimilarities according to six specified attributes (e.g., texture contrast). The implied dimensionality of perceptual texture space was usually at least four, rather than three. All six attributes proved to be useful predictors of overall dissimilarity, especially coarseness and regularity. The novel attribute texture lightness, an assessment of mean surface reflectance, was important when viewing conditions were wide-ranging. We were impressed by the general validity of texture judgments across subject, texture set, and comfortable viewing distances or lighting intensities. The attributes are nonorthogonal directions in four-dimensional perceptual space and are probably not narrow linear axes. In a supplementary experiment, we studied a completely different task: identifying textures from a distance. The dimensionality for this more refined task is similar to that for rating judgments, so our findings may have general application.

  8. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance under rotation and parameterization.

  9. Regular Motions of Resonant Asteroids

    NASA Astrophysics Data System (ADS)

    Ferraz-Mello, S.

    1990-11-01

    This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS

  10. Three regularities of recognition memory: the role of bias.

    PubMed

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  11. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
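
    The spatial feature indicator used above, difference curvature, compares the second derivative along the gradient direction with the one across it: it is large at edges and small in both flat and noisy regions. A generic finite-difference sketch (not the paper's implementation):

    ```python
    import numpy as np

    def difference_curvature(u, eps=1e-8):
        """Edge/flat indicator | |u_nn| - |u_tt| | for a 2D image u, where u_nn
        and u_tt are second derivatives along and across the gradient (sketch)."""
        uy, ux = np.gradient(u)
        uyy, uyx = np.gradient(uy)
        uxy, uxx = np.gradient(ux)
        g2 = ux ** 2 + uy ** 2 + eps
        u_nn = (ux ** 2 * uxx + 2 * ux * uy * uxy + uy ** 2 * uyy) / g2
        u_tt = (uy ** 2 * uxx - 2 * ux * uy * uxy + ux ** 2 * uyy) / g2
        return np.abs(np.abs(u_nn) - np.abs(u_tt))
    ```

    The regularization term and factor can then be interpolated between TV-like behavior (at edges) and Tikhonov-like behavior (in flat regions) according to this indicator.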

  12. Generalization Analysis of Fredholm Kernel Regularized Classifiers.

    PubMed

    Gong, Tieliang; Xu, Zongben; Chen, Hong

    2017-07-01

    Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers. We prove that the corresponding learning rate can achieve [Formula: see text] ([Formula: see text] is the number of labeled samples) in a limiting case. In addition, a representer theorem is provided for the proposed regularized scheme, which underlies its applications.

  13. Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfrum, Matthias; Omel'chenko, Oleh E.; Sieber, Jan

    We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop, connecting the phase lag to the order parameter, we can observe chimera states also for systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium.
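
    The underlying model class can be sketched as a ring of phase oscillators with nonlocal coupling and a phase lag. A minimal Euler step follows; the global feedback loop connecting the phase lag to the order parameter, central to the paper, is omitted here, and the parameter values are illustrative.

    ```python
    import numpy as np

    def step(theta, dt=0.025, alpha=1.46, coupling_range=0.35):
        """One explicit Euler step for N phase oscillators on a ring, each
        coupled to the fraction coupling_range of neighbors on either side
        with phase lag alpha (sketch of the model class)."""
        N = theta.size
        P = int(coupling_range * N)
        acc = np.zeros_like(theta)
        for k in range(1, P + 1):
            acc += np.sin(theta - np.roll(theta, k) + alpha)
            acc += np.sin(theta - np.roll(theta, -k) + alpha)
        return theta + dt * (-acc / (2 * P))
    ```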

  14. A space-frequency multiplicative regularization for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilize the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of excitation sources, as well as that of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. It is more particularly pointed out that properly exploiting the space-frequency characteristics of the excitation field to identify can improve the quality of the force reconstruction.

  15. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  16. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  17. Exploring the spectrum of regularized bosonic string theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambjørn, J., E-mail: ambjorn@nbi.dk; Makeenko, Y., E-mail: makeenko@nbi.dk

    2015-03-15

    We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.

  18. X-ray computed tomography using curvelet sparse regularization.

    PubMed

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  19. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  20. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

    We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α, then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  1. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    ERIC Educational Resources Information Center

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  2. 32 CFR 724.211 - Regularity of government affairs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false Regularity of government affairs. 724.211 Section 724.211 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY PERSONNEL NAVAL DISCHARGE REVIEW BOARD Authority/Policy for Departmental Discharge Review § 724.211 Regularity of government...

  3. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regards to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an ℓ2 data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requests the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data

  4. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Disability for any regular employment, defined... RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is disabled...

  5. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Disability for any regular employment, defined... RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is disabled...

  6. Regular network model for the sea ice-albedo feedback in the Arctic.

    PubMed

    Müller-Stoffels, Marc; Wackerbauer, Renate

    2011-03-01

    The Arctic Ocean and sea ice form a feedback system that plays an important role in the global climate. The complexity of highly parameterized global circulation (climate) models makes it very difficult to assess feedback processes in climate without the concurrent use of simple models where the physics is understood. We introduce a two-dimensional energy-based regular network model to investigate feedback processes in an Arctic ice-ocean layer. The model includes the nonlinear aspect of the ice-water phase transition, a nonlinear diffusive energy transport within a heterogeneous ice-ocean lattice, and spatiotemporal atmospheric and oceanic forcing at the surfaces. First results for a horizontally homogeneous ice-ocean layer show bistability and related hysteresis between perennial ice and perennial open water for varying atmospheric heat influx. Seasonal ice cover exists as a transient phenomenon. We also find that ocean heat fluxes are more efficient than atmospheric heat fluxes to melt Arctic sea ice.
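
    A highly simplified sketch of one update of such an energy-based lattice: temperature is recovered from enthalpy through the ice-water phase transition (temperature is pinned at the melting point while latent heat is absorbed), and energy then diffuses with a phase-dependent conductivity. All parameter values are illustrative assumptions, and the paper's atmospheric and oceanic forcing terms and lattice heterogeneity are omitted.

    ```python
    import numpy as np

    def step(E, dt, dx, k_ice=2.2, k_water=0.6, L=3.0e8, c=4.0e6):
        """One explicit diffusion step for an enthalpy field E (J/m^3) on a
        periodic 2D lattice (sketch): E < 0 is ice, 0 <= E <= L is melting,
        E > L is water; L is a volumetric latent heat, c a heat capacity."""
        T = np.where(E < 0, E / c, np.where(E > L, (E - L) / c, 0.0))
        k = np.where(E > L, k_water, k_ice)        # phase-dependent conductivity
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx ** 2
        return E + dt * k * lap
    ```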

  7. Fabrication and Characterization of Three Dimensional Photonic Crystals Generated by Multibeam Interference Lithography

    DTIC Science & Technology

    2009-01-01


  8. Confronting effective models for deconfinement in dense quark matter with lattice data

    NASA Astrophysics Data System (ADS)

    Andersen, Jens O.; Brauner, Tomáš; Naylor, William R.

    2015-12-01

    Ab initio numerical simulations of the thermodynamics of dense quark matter remain a challenge. Apart from the infamous sign problem, lattice methods have to deal with finite volume and discretization effects as well as with the necessity to introduce sources for symmetry-breaking order parameters. We study these artifacts in the Polyakov-loop-extended Nambu-Jona-Lasinio (PNJL) model and compare its predictions to existing lattice data for cold and dense two-color matter with two flavors of Wilson quarks. To achieve even qualitative agreement with lattice data requires the introduction of two novel elements in the model: (i) explicit chiral symmetry breaking in the effective contact four-fermion interaction, referred to as the chiral twist, and (ii) renormalization of the Polyakov loop. The feedback of the dense medium to the gauge sector is modeled by a chemical-potential-dependent scale in the Polyakov-loop potential. In contrast to previously used analytical Ansätze, we determine its dependence on the chemical potential from lattice data for the expectation value of the Polyakov loop. Finally, we propose adding a two-derivative operator to our effective model. This term acts as an additional source of explicit chiral symmetry breaking, mimicking an analogous term in the lattice Wilson action.
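
    For reference, a commonly quoted polynomial parameterization of the Polyakov-loop potential (the form of Ratti, Thaler and Weise) can be evaluated as below; the coefficients are taken from the literature and should be treated as assumptions, and the chemical-potential-dependent scale discussed above would replace the constant T0.

    ```python
    def polyakov_potential(phi, phibar, T, T0=0.270):
        """Polynomial Polyakov-loop potential U/T^4 (sketch). T and T0 in GeV;
        phi, phibar are the Polyakov loop and its conjugate expectation values."""
        t = T0 / T
        b2 = 6.75 - 1.95 * t + 2.625 * t ** 2 - 7.44 * t ** 3
        b3, b4 = 0.75, 7.5
        return (-0.5 * b2 * phi * phibar
                - (b3 / 6.0) * (phi ** 3 + phibar ** 3)
                + (b4 / 4.0) * (phi * phibar) ** 2)
    ```

    Minimizing this potential over the loop variables at each temperature reproduces the expected deconfinement pattern: a vanishing loop at low T and a loop approaching one at high T.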

  9. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
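
    Within the SVD framework mentioned above, the Tikhonov solution and the model resolution matrix follow directly from filter factors. A minimal sketch, where G is the sensitivity matrix, d the data vector, and lam the regularization parameter:

    ```python
    import numpy as np

    def tikhonov_svd(G, d, lam):
        """Tikhonov-regularized solution m and resolution matrix R via SVD:
        filter factors f_i = s_i^2 / (s_i^2 + lam^2) damp small singular values."""
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        f = s ** 2 / (s ** 2 + lam ** 2)
        m = Vt.T @ (f * (U.T @ d) / s)     # filtered generalized inverse applied to d
        R = (Vt.T * f) @ Vt                # resolution matrix R = V F V^T
        return m, R
    ```

    Sweeping lam and evaluating a risk criterion on theoretical training models then selects the regularization, with the misfit-uncertainty tradeoff available from the same decomposition.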

  10. Regular and platform switching: bone stress analysis varying implant type.

    PubMed

    Gurgel-Juarez, Nália Cecília; de Almeida, Erika Oliveira; Rocha, Eduardo Passos; Freitas, Amílcar Chagas; Anchieta, Rodolfo Bruniera; de Vargas, Luis Carlos Merçon; Kina, Sidney; França, Fabiana Mantovani Gomes

    2012-04-01

    This study aimed to evaluate stress distribution on peri-implant bone simulating the influence of platform switching in external and internal hexagon implants using three-dimensional finite element analysis. Four mathematical models of a central incisor supported by an implant were created: External Regular model (ER) with 5.0 mm × 11.5 mm external hexagon implant and 5.0 mm abutment (0% abutment shifting), Internal Regular model (IR) with 4.5 mm × 11.5 mm internal hexagon implant and 4.5 mm abutment (0% abutment shifting), External Switching model (ES) with 5.0 mm × 11.5 mm external hexagon implant and 4.1 mm abutment (18% abutment shifting), and Internal Switching model (IS) with 4.5 mm × 11.5 mm internal hexagon implant and 3.8 mm abutment (15% abutment shifting). The models were created by SolidWorks software. The numerical analysis was performed using ANSYS Workbench. Oblique forces (100 N) were applied to the palatal surface of the central incisor. The maximum (σ(max)) and minimum (σ(min)) principal stress, equivalent von Mises stress (σ(vM)), and maximum principal elastic strain (ε(max)) values were evaluated for the cortical and trabecular bone. For cortical bone, the highest stress values (σ(max) and σ(vm) ) (MPa) were observed in IR (87.4 and 82.3), followed by IS (83.3 and 72.4), ER (82 and 65.1), and ES (56.7 and 51.6). For ε(max), IR showed the highest stress (5.46e-003), followed by IS (5.23e-003), ER (5.22e-003), and ES (3.67e-003). For the trabecular bone, the highest stress values (σ(max)) (MPa) were observed in ER (12.5), followed by IS (12), ES (11.9), and IR (4.95). For σ(vM), the highest stress values (MPa) were observed in IS (9.65), followed by ER (9.3), ES (8.61), and IR (5.62). For ε(max) , ER showed the highest stress (5.5e-003), followed by ES (5.43e-003), IS (3.75e-003), and IR (3.15e-003). The influence of platform switching was more evident for cortical bone than for trabecular bone, mainly for the external hexagon

  11. Selected Characteristics, Classified & Unclassified (Regular) Students; Community Colleges, Fall 1978.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu. Community Coll. System.

    Fall 1978 enrollment data for Hawaii's community colleges and data on selected characteristics of students enrolled in regular credit programs are presented. Of the 27,880 registrants, 74% were regular students, 1% were early admittees, 6% were registered in non-credit apprenticeship programs, and 18% were in special programs. Regular student…

  12. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem onto a problem about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors than the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4

  13. Metric dimensional reduction at singularities with implications to Quantum Gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoica, Ovidiu Cristinel, E-mail: holotronix@gmail.com

    2014-08-15

    A series of old and recent theoretical observations suggests that the quantization of gravity would be feasible, and some problems of Quantum Field Theory would go away if, somehow, the spacetime would undergo a dimensional reduction at high energy scales. But an identification of the deep mechanism causing this dimensional reduction would still be desirable. The main contribution of this article is to show that dimensional reduction effects are due to General Relativity at singularities, and do not need to be postulated ad-hoc. Recent advances in understanding the geometry of singularities do not require modification of General Relativity, being just non-singular extensions of its mathematics to the limit cases. They turn out to work fine for some known types of cosmological singularities (black holes and FLRW Big-Bang), allowing a choice of the fundamental geometric invariants and physical quantities which remain regular. The resulting equations are equivalent to the standard ones outside the singularities. One consequence of this mathematical approach to the singularities in General Relativity is a special, (geo)metric type of dimensional reduction: at singularities, the metric tensor becomes degenerate in certain spacetime directions, and some properties of the fields become independent of those directions. Effectively, it is like one or more dimensions of spacetime just vanish at singularities. This suggests that it is worth exploring the possibility that the geometry of singularities leads naturally to the spontaneous dimensional reduction needed by Quantum Gravity. - Highlights: • The singularities we introduce are described by finite geometric/physical objects. • Our singularities are accompanied by dimensional reduction effects. • They affect the metric, the measure, the topology, the gravitational DOF (Weyl = 0). • Effects proposed in other approaches to Quantum Gravity are obtained naturally. • The geometric dimensional reduction

  14. Flavor and topological current correlators in parity-invariant three-dimensional QED

    NASA Astrophysics Data System (ADS)

    Karthik, Nikhil; Narayanan, Rajamani

    2017-09-01

    We use lattice regularization to study the flow of the flavor-triplet fermion current central charge CJf from its free field value in the ultraviolet limit to its conformal value in the infrared limit of the parity-invariant three-dimensional QED with two flavors of two-component fermions. The dependence of CJf on the scale is weak with a tendency to be below the free field value at intermediate distances. Our numerical data suggest that the flavor-triplet fermion current and the topological current correlators become degenerate within numerical errors in the infrared limit, thereby supporting an enhanced O(4) symmetry predicted by strong self-duality. Further, we demonstrate that fermion dynamics is necessary for the scale-invariant behavior of parity-invariant three-dimensional QED by showing that the pure gauge theory with noncompact gauge action has a nonzero bilinear condensate.

  15. Likelihood ratio decisions in memory: three implied regularities.

    PubMed

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
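
    The mirror effect falls out of deciding on a likelihood-ratio axis. For equal-variance Gaussian strength distributions N(0,1) for new items and N(d,1) for old items, the log likelihood ratio is d*x - d^2/2, so its means under the two item types sit symmetrically at -d^2/2 and +d^2/2. A small simulation sketch (the Gaussian pair is an illustrative special case, not the paper's full model family):

    ```python
    import numpy as np

    def loglr_means(d, n=100_000, seed=0):
        """Simulate mean log likelihood ratios for new and old items under
        equal-variance Gaussian strength distributions (sketch)."""
        rng = np.random.default_rng(seed)
        new = rng.normal(0.0, 1.0, n)
        old = rng.normal(d, 1.0, n)
        loglr = lambda x: d * x - d ** 2 / 2.0   # log LR for this Gaussian pair
        return loglr(new).mean(), loglr(old).mean()

    # loglr_means(1.5) -> approximately (-1.125, +1.125): the mirror effect
    ```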

  16. Three-dimensional color Doppler echocardiographic quantification of tricuspid regurgitation orifice area: comparison with conventional two-dimensional measures.

    PubMed

    Chen, Tien-En; Kwon, Susan H; Enriquez-Sarano, Maurice; Wong, Benjamin F; Mankad, Sunil V

    2013-10-01

    Three-dimensional (3D) color Doppler echocardiography (CDE) provides directly measured vena contracta area (VCA). However, a large comprehensive 3D color Doppler echocardiographic study with sufficiently severe tricuspid regurgitation (TR) to verify its value in determining TR severity in comparison with conventional quantitative and semiquantitative two-dimensional (2D) parameters has not been previously conducted. The aim of this study was to examine the utility and feasibility of directly measured VCA by 3D transthoracic CDE, its correlation with 2D echocardiographic measurements of TR, and its ability to determine severe TR. Ninety-two patients with mild or greater TR prospectively underwent 2D and 3D transthoracic echocardiography. Two-dimensional evaluation of TR severity included the ratio of jet area to right atrial area, vena contracta width, and quantification of effective regurgitant orifice area using the flow convergence method. Full-volume breath-hold 3D color data sets of TR were obtained using a real-time 3D echocardiography system. VCA was directly measured by 3D-guided direct planimetry of the color jet. Subgroup analysis included the presence of a pacemaker, eccentricity of the TR jet, ellipticity of the orifice shape, underlying TR mechanism, and baseline rhythm. Three-dimensional VCA correlated well with effective regurgitant orifice area (r = 0.62, P < .0001), moderately with vena contracta width (r = 0.42, P < .0001), and weakly with jet area/right atrial area ratio. Subgroup analysis comparing 3D VCA with 2D effective regurgitant orifice area demonstrated excellent correlation for organic TR (r = 0.86, P < .0001), regular rhythm (r = 0.78, P < .0001), and circular orifice (r = 0.72, P < .0001) but poor correlation in atrial fibrillation rhythm (r = 0.23, P = .0033). Receiver operating characteristic curve analysis for 3D VCA demonstrated good accuracy for severe TR determination. Three-dimensional VCA measurement is feasible and obtainable

  17. A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing

    NASA Astrophysics Data System (ADS)

    Wu, Zhejun; Kudenov, Michael W.

    2017-05-01

    This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, leading to an underdetermined linear system that is hard to recover uniquely. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of the B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results is within 5.15%.
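
    The reparameterization described above can be sketched for a single recovered spectrum: expanding the unknown in a B-spline basis turns the underdetermined system A s = y into a small regularized least-squares problem in the coefficients. The names, the knot vector, and the second-difference roughness penalty below are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    def solve_spectrum(A, y, wavelengths, knots, degree, lam):
        """Solve min_c ||A B c - y||^2 + lam ||D c||^2, where B is the B-spline
        design matrix evaluated at the wavelength samples (sketch)."""
        B = BSpline.design_matrix(wavelengths, knots, degree).toarray()
        M = A @ B                                     # measurement model in coefficient space
        D = np.diff(np.eye(B.shape[1]), n=2, axis=0)  # roughness (2nd difference) penalty
        c = np.linalg.solve(M.T @ M + lam * D.T @ D, M.T @ y)
        return B @ c                                  # reconstructed spectrum
    ```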

  18. The Essential Special Education Guide for the Regular Education Teacher

    ERIC Educational Resources Information Center

    Burns, Edward

    2007-01-01

    The Individuals with Disabilities Education Act (IDEA) of 2004 has placed a renewed emphasis on the importance of the regular classroom, the regular classroom teacher and the general curriculum as the primary focus of special education. This book contains over 100 topics that deal with real issues and concerns regarding the regular classroom and…

  19. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of the...

  20. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of the...

  1. Borderline personality disorder and regularly drinking alcohol before sex.

    PubMed

    Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S

    2017-07-01

    Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol

  2. Generalized Bregman distances and convergence rates for non-convex regularization methods

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus

    2010-11-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds if the regularization term has a slightly faster growth at zero than |t|^p.
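
    As a concrete instance of the sparse-regularization setting mentioned here, restricted to the convex endpoint p = 1, one can minimize ||Ax - b||^2 + alpha*||x||_1 by iterative soft thresholding; the operator, data, and parameters below are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, 5, replace=False)] = rng.normal(size=5)
    delta = 1e-2
    b = A @ x_true + delta * rng.normal(size=50)     # noisy data, noise level delta

    alpha = 0.1                                      # regularization parameter
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / ||A||^2 guarantees convergence

    x = np.zeros(200)
    for _ in range(500):
        g = x - step * (A.T @ (A @ x - b))           # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * alpha, 0.0)  # prox of alpha*|x|_1

    print("support recovered:", np.nonzero(np.abs(x) > 1e-3)[0])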

  3. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise in its solution. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality relative to two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with experimental data.
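
    A generic sketch of the regional idea follows, with a spatially varying weight standing in for the paper's spectral-transformation regularizer; the sensitivity matrix, weights, and sizes below are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 60, 100                       # measurements (capacitances), image pixels
    J = rng.normal(size=(m, n))          # stand-in sensitivity (Jacobian) matrix
    x_true = (np.arange(n) > n // 2).astype(float)
    d = J @ x_true + 0.01 * rng.normal(size=m)

    L = np.diff(np.eye(n), axis=0)       # first-difference operator, (n-1, n)
    w = np.ones(n - 1)
    w[40:60] = 0.1                       # weaker smoothing where edges are expected
    WL = w[:, None] * L

    lam = 1e-1
    # regionally weighted Tikhonov solution: min ||Jx - d||^2 + lam*||W L x||^2
    x = np.linalg.solve(J.T @ J + lam * WL.T @ WL, J.T @ d)
    print("reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))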

  4. Manufacture of Regularly Shaped Sol-Gel Pellets

    NASA Technical Reports Server (NTRS)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, while regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogel, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogel. Advantageously, the extruded pellets can be dropped directly in a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross linker. These pellets can be dried to mechanically robust X-Aerogel.

  5. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for the image is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactorily restored color images under different blurring conditions.
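
    The alternating-minimization structure can be sketched in a deliberately naive single-channel form with quadratic regularizers; the paper's edge-preserving, interchannel, and soft parametric terms are not reproduced, and all parameters below are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 64
    x_true = rng.random((n, n))
    psf = np.zeros((n, n))
    psf[:5, :5] = 1.0 / 25.0                                   # 5x5 box blur
    y = np.real(np.fft.ifft2(np.fft.fft2(x_true) * np.fft.fft2(psf)))
    y += 0.001 * rng.normal(size=(n, n))
    Yf = np.fft.fft2(y)

    lam, gam = 1e-3, 1e-3
    Kf = np.fft.fft2(np.ones((n, n)) / n ** 2)                 # flat initial blur guess
    for _ in range(50):
        Xf = np.conj(Kf) * Yf / (np.abs(Kf) ** 2 + lam)        # image step (closed form)
        Kf = np.conj(Xf) * Yf / (np.abs(Xf) ** 2 + gam)        # blur step (closed form)

    x_hat = np.real(np.fft.ifft2(Xf))
    print("image error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))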

  6. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
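
    A minimal sketch of the multiscale-kernel construction follows, using a regularized least-squares surrogate rather than the paper's ranking loss; the scales, data, and parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(80, 5))
    y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=80)    # relevance scores

    def rbf(A, B, s):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * s ** 2))

    scales = [0.5, 1.0, 2.0, 4.0]                            # the multiscale part
    K = sum(rbf(X, X, s) for s in scales)                    # sum of kernels at all scales

    lam = 1e-2
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

    X_new = rng.normal(size=(5, 5))
    scores = sum(rbf(X_new, X, s) for s in scales) @ alpha   # rank test items by score
    print("predicted ranking:", np.argsort(-scores))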

  7. On split regular Hom-Lie superalgebras

    NASA Astrophysics Data System (ADS)

    Albuquerque, Helena; Barreiro, Elisabete; Calderón, A. J.; Sánchez, José M.

    2018-06-01

    We introduce the class of split regular Hom-Lie superalgebras as the natural extension of the one of split Hom-Lie algebras and Lie superalgebras, and study its structure by showing that an arbitrary split regular Hom-Lie superalgebra L is of the form L = U + ∑_j I_j, with U a linear subspace of a maximal abelian graded subalgebra H and each I_j a well-described (split) ideal of L satisfying [I_j, I_k] = 0 if j ≠ k. Under certain conditions, the simplicity of L is characterized, and it is shown that L is the direct sum of the family of its simple ideals.

  8. One-loop corrections from higher dimensional tree amplitudes

    DOE PAGES

    Cachazo, Freddy; He, Song; Yuan, Ellis Ye

    2016-08-01

    We show how one-loop corrections to scattering amplitudes of scalars and gauge bosons can be obtained from tree amplitudes in one higher dimension. Starting with a complete tree-level scattering amplitude of n + 2 particles in five dimensions, one assumes that two of them cannot be “detected” and therefore an integration over their LIPS is carried out. The resulting object, a function of the remaining n particles, is taken to be four-dimensional by restricting the corresponding momenta. We perform this procedure in the context of the tree-level CHY formulation of amplitudes. The scattering equations obtained in the procedure coincide with those derived by Geyer et al. from ambitwistor constructions and recently studied by two of the authors for bi-adjoint scalars. They have two sectors of solutions: regular and singular. We prove that the contribution from regular solutions generically gives rise to unphysical poles. However, using a BCFW argument we prove that the unphysical contributions are always homogeneous functions of the loop momentum and can be discarded. We also show that the contribution from singular solutions turns out to be homogeneous as well.

  9. One-loop corrections from higher dimensional tree amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cachazo, Freddy; He, Song; Yuan, Ellis Ye

    We show how one-loop corrections to scattering amplitudes of scalars and gauge bosons can be obtained from tree amplitudes in one higher dimension. Starting with a complete tree-level scattering amplitude of n + 2 particles in five dimensions, one assumes that two of them cannot be “detected” and therefore an integration over their LIPS is carried out. The resulting object, a function of the remaining n particles, is taken to be four-dimensional by restricting the corresponding momenta. We perform this procedure in the context of the tree-level CHY formulation of amplitudes. The scattering equations obtained in the procedure coincide with those derived by Geyer et al. from ambitwistor constructions and recently studied by two of the authors for bi-adjoint scalars. They have two sectors of solutions: regular and singular. We prove that the contribution from regular solutions generically gives rise to unphysical poles. However, using a BCFW argument we prove that the unphysical contributions are always homogeneous functions of the loop momentum and can be discarded. We also show that the contribution from singular solutions turns out to be homogeneous as well.

  10. 20 CFR 226.33 - Spouse regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    The final tier I and tier II rates, from §§ 226.30 and 226.32, are... (Title 20, Employees' Benefits, Vol. 1, revised as of 2010-04-01; COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES; Computing a Spouse or Divorced Spouse Annuity; § 226.33 Spouse regular annuity rate)

  11. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
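
    For orientation, the classic Richardson-Lucy multiplicative update that RUMBA-SD adapts looks as follows on a toy 1-D Poisson deconvolution; the Rician and noncentral-chi likelihood corrections of the paper modify the ratio term and are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 100
    A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)]
                  for i in range(n)])
    A /= A.sum(axis=1, keepdims=True)              # nonnegative blur operator
    x_true = np.zeros(n)
    x_true[30], x_true[60] = 5.0, 3.0
    b = rng.poisson(A @ x_true + 0.1).astype(float)

    x = np.ones(n)                                  # positive initialization
    ones = np.ones(n)
    for _ in range(200):
        ratio = b / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / (A.T @ ones)           # multiplicative RL update

    print("peaks found near:", sorted(np.argsort(-x)[:2]))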

  12. High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis

    PubMed Central

    Daye, Z. John; Chen, Jinbo; Li, Hongzhe

    2011-01-01

    Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTLs) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
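
    One plausible reading of the doubly regularized mean/variance idea is the following alternating sketch: a variance-weighted lasso for the mean, then a lasso fit to log squared residuals for the variance. This is a guess at the structure, not the authors' exact algorithm; penalties and data are toy.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(6)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
    gamma = np.zeros(p); gamma[3] = 1.0                  # predictor driving the variance
    sigma = np.exp(0.5 * (X @ gamma))
    y = X @ beta + sigma * rng.normal(size=n)

    sigma2 = np.ones(n)
    for _ in range(3):
        w = np.sqrt(1.0 / sigma2)                        # downweight high-variance rows
        mean_fit = Lasso(alpha=0.05, fit_intercept=False).fit(X * w[:, None], y * w)
        resid = y - X @ mean_fit.coef_
        var_fit = Lasso(alpha=0.05).fit(X, np.log(resid ** 2 + 1e-8))
        sigma2 = np.exp(X @ var_fit.coef_ + var_fit.intercept_)

    print("mean support:", np.nonzero(np.abs(mean_fit.coef_) > 1e-3)[0])
    print("variance support:", np.nonzero(np.abs(var_fit.coef_) > 1e-3)[0])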

  13. Female non-regular workers in Japan: their current status and health.

    PubMed

    Inoue, Mariko; Nishikitani, Mariko; Tsurugano, Shinobu

    2016-12-07

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, research on their health remains limited. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibilities and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers may be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers life-course aspects is needed to protect the health and well-being of female workers; promotion of an occupational health program alone is insufficient.

  14. Female non-regular workers in Japan: their current status and health

    PubMed Central

    INOUE, Mariko; NISHIKITANI, Mariko; TSURUGANO, Shinobu

    2016-01-01

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, research on their health remains limited. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibilities and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers may be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers life-course aspects is needed to protect the health and well-being of female workers; promotion of an occupational health program alone is insufficient. PMID:27818453

  15. Chaotic orbits obeying one isolating integral in a four-dimensional map

    NASA Astrophysics Data System (ADS)

    Muzzio, J. C.

    2018-02-01

    We have recently presented strong evidence that chaotic orbits that obey one isolating integral besides energy exist in a toy Hamiltonian model with three degrees of freedom and are bounded by regular orbits that isolate them from the Arnold web. The interval covered by those numerical experiments was equivalent to about one million Hubble times in a galactic context. Here, we use a four-dimensional map to confirm our previous results and to extend that interval 50 times. We show that, at least within that interval, features found in lower-dimensional Hamiltonian systems and maps are also present in our study; e.g., within the phase space occupied by a chaotic orbit that obeys one integral there are subspaces that the orbit does not enter, which are instead occupied by regular orbits that, if tori, bound other chaotic orbits obeying one integral and, if cantori, produce stickiness. We argue that the validity of our results might exceed the time intervals covered by the numerical experiments.

  16. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of breast ultrasound CAD based on texture features, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.
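
    A toy sketch of the Iter-LR ingredient alone follows: the graph Laplacian L is replaced by L^m in a semi-supervised least-squares problem. The coupling with the correntropy feature-selection objective (CRFS) is not reproduced, and the graph construction and parameters are assumptions.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    rng = np.random.default_rng(7)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y_true = np.r_[np.zeros(50), np.ones(50)]
    labeled = np.r_[np.arange(5), np.arange(50, 55)]          # few labeled instances

    W = kneighbors_graph(X, 8, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                                    # symmetrize the kNN graph
    L = np.diag(W.sum(1)) - W

    m, gamma = 2, 1.0                                         # Iter-LR uses L^m with m > 1
    Lm = np.linalg.matrix_power(L, m)
    S = np.zeros((len(labeled), len(X)))
    S[np.arange(len(labeled)), labeled] = 1.0                 # selects the labeled nodes

    # min_f ||S f - y_labeled||^2 + gamma * f^T L^m f  (closed-form solve)
    f = np.linalg.solve(S.T @ S + gamma * Lm, S.T @ y_true[labeled])
    print(f"transductive accuracy: {np.mean((f > 0.5) == y_true):.2f}")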

  17. The relationship between lifestyle regularity and subjective sleep quality

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant (rho = -0.4, p < 0.001) correlation between SRM (both metrics) and PSQI, indicating that subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.

  18. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by minimizing U_T(m) = ||F(m) - d||^2 + λ P(m). The second term is the stabilizing functional, with P(m) = ||∇m||^2 the usual choice, and λ the regularization parameter. Due to the inclusion of this roughness penalizer, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers in nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, and also on hybrid TV, second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is low-induction-number magnetic dipoles. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.
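
    As a small instance of the Bregman machinery mentioned above, here is a split Bregman solver for 1-D TV denoising, min_u (mu/2)||u - f||^2 + |Du|_1; this covers the convex case only, and the nonconvex and infimal-convolution regularizers of the study are not reproduced.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 200
    u_true = np.r_[np.zeros(100), np.ones(100)]               # piecewise-constant model
    f = u_true + 0.1 * rng.normal(size=n)

    D = np.diff(np.eye(n), axis=0)                            # forward differences, (n-1, n)
    mu, lam = 20.0, 5.0
    M = mu * np.eye(n) + lam * D.T @ D                        # u-step system matrix

    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(100):
        u = np.linalg.solve(M, mu * f + lam * D.T @ (d - b))  # quadratic u-step
        s = D @ u + b
        d = np.sign(s) * np.maximum(np.abs(s) - 1.0 / lam, 0) # shrinkage (d-step)
        b = b + D @ u - d                                     # Bregman update

    print("edge located near index:", int(np.argmax(np.abs(D @ u))))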

  19. Spatiotemporal dynamics of oscillatory cellular patterns in three-dimensional directional solidification.

    PubMed

    Bergeon, N; Tourret, D; Chen, L; Debierre, J-M; Guérin, R; Ramirez, A; Billia, B; Karma, A; Trivedi, R

    2013-05-31

    We report results of directional solidification experiments conducted on board the International Space Station and quantitative phase-field modeling of those experiments. The experiments image in situ, for the first time, the spatially extended dynamics of three-dimensional cellular array patterns formed under microgravity conditions where fluid flow is suppressed. Experiments and phase-field simulations reveal the existence of oscillatory breathing modes with time periods of several tens of minutes. Oscillating cells are usually noncoherent due to array disorder, with the exception of small areas where the array structure is regular and stable.

  20. Boundary Regularity for the Porous Medium Equation

    NASA Astrophysics Data System (ADS)

    Björn, Anders; Björn, Jana; Gianazza, Ugo; Siljander, Juhana

    2018-05-01

    We study the boundary regularity of solutions to the porous medium equation u_t = Δu^m in the degenerate range m > 1. In particular, we show that in cylinders the Dirichlet problem with positive continuous boundary data on the parabolic boundary has a solution which attains the boundary values, provided that the spatial domain satisfies the elliptic Wiener criterion. This condition is known to be optimal, and it is a consequence of our main theorem, which establishes a barrier characterization of regular boundary points for general, not necessarily cylindrical, domains in R^{n+1}. One of our fundamental tools is a new strict comparison principle between sub- and superparabolic functions, which makes it essential for us to study both nonstrict and strict Perron solutions to be able to develop a fruitful boundary regularity theory. Several other comparison principles and pasting lemmas are also obtained. In the process we obtain a rather complete picture of the relation between sub/superparabolic functions and weak sub/supersolutions.

  1. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  2. Gauge theory for finite-dimensional dynamical systems.

    PubMed

    Gurfil, Pini

    2007-06-01

    Gauge theory is a well-established concept in quantum physics, electrodynamics, and cosmology. This concept has recently proliferated into new areas, such as mechanics and astrodynamics. In this paper, we discuss a few applications of gauge theory in finite-dimensional dynamical systems. We focus on the concept of rescriptive gauge symmetry, which is, in essence, rescaling of an independent variable. We show that a simple gauge transformation of multiple harmonic oscillators driven by chaotic processes can render an apparently "disordered" flow into a regular dynamical process, and that there exists a strong connection between gauge transformations and reduction theory of ordinary differential equations. Throughout the discussion, we demonstrate the main ideas by considering examples from diverse fields, including quantum mechanics, chemistry, rigid-body dynamics, and information theory.

  3. The United States Regular Education Initiative: Flames of Controversy.

    ERIC Educational Resources Information Center

    Lowenthal, Barbara

    1990-01-01

    Arguments in favor of and against the Regular Education Initiative (REI) are presented. Lack of appropriate qualifications of regular classroom teachers and a lack of empirical evidence on REI effectiveness are cited as some of the problems with the approach. (JDD)

  4. Remarks on regular black holes

    NASA Astrophysics Data System (ADS)

    Nicolini, Piero; Smailagic, Anais; Spallucci, Euro

    Recently, it has been claimed by Chinaglia and Zerbini that the curvature singularity is present even in the so-called regular black hole solutions of the Einstein equations. In this brief note, we show that this criticism is devoid of any physical content.

  5. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...Railroad Retirement Act based on disability for any regular employment. Regular employment means... (Title 20, Employees' Benefits, Vol. 1, revised as of 2013-04-01; RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...; § 220.100 Evaluation of disability for any regular employment)

  6. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...Railroad Retirement Act based on disability for any regular employment. Regular employment means... (Title 20, Employees' Benefits, Vol. 1, revised as of 2010-04-01; RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...; § 220.100 Evaluation of disability for any regular employment)

  7. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...Railroad Retirement Act based on disability for any regular employment. Regular employment means... (Title 20, Employees' Benefits, Vol. 1, revised as of 2012-04-01; RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...; § 220.100 Evaluation of disability for any regular employment)

  8. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...Railroad Retirement Act based on disability for any regular employment. Regular employment means... (Title 20, Employees' Benefits, Vol. 1, revised as of 2011-04-01; RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...; § 220.100 Evaluation of disability for any regular employment)

  9. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...Railroad Retirement Act based on disability for any regular employment. Regular employment means... (Title 20, Employees' Benefits, Vol. 1, revised as of 2014-04-01; RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...; § 220.100 Evaluation of disability for any regular employment)

  10. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm, and show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.

  11. Low-rank regularization for learning gene expression programs.

    PubMed

    Ye, Guibo; Tang, Mengfan; Cai, Jian-Feng; Nie, Qing; Xie, Xiaohui

    2013-01-01

    Learning gene expression programs directly from a set of observations is challenging due to the complexity of gene regulation, the high noise of experimental measurements, and the insufficient number of experimental measurements. Imposing additional constraints with strong and biologically motivated regularizations is critical in developing reliable and effective algorithms for inferring gene expression programs. Here we propose a new form of regularization that constrains the number of independent connectivity patterns between regulators and targets, motivated by the modular design of gene regulatory programs and the belief that the total number of independent regulatory modules should be small. We formulate a multi-target linear regression framework to incorporate this type of regularization, in which the number of independent connectivity patterns is expressed as the rank of the connectivity matrix between regulators and targets. We then generalize the linear framework to nonlinear cases, and prove that the generalized low-rank regularization model is still convex. Efficient algorithms are derived to solve both the linear and nonlinear low-rank regularized problems. Finally, we test the algorithms on three gene expression datasets, and show that the low-rank regularization improves the accuracy of gene expression prediction in these three datasets.
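
    The linear low-rank formulation can be sketched with a proximal-gradient loop whose proximal step is singular value thresholding of the connectivity matrix; the nonlinear extension in the paper is not reproduced, and sizes and the regularization weight are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(9)
    n, p, q, r = 100, 20, 30, 2                # samples, regulators, targets, true rank
    X = rng.normal(size=(n, p))
    W_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))
    Y = X @ W_true + 0.1 * rng.normal(size=(n, q))

    lam = 0.5
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)     # 1 / Lipschitz constant of the gradient

    W = np.zeros((p, q))
    for _ in range(300):
        G = X.T @ (X @ W - Y) / n                    # gradient of the least-squares term
        U, s, Vt = np.linalg.svd(W - step * G, full_matrices=False)
        W = U @ np.diag(np.maximum(s - step * lam, 0)) @ Vt   # singular value thresholding

    print("estimated rank:", int(np.sum(np.linalg.svd(W, compute_uv=False) > 1e-6)))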

  12. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method, since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method for finding the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise, and image resolution. Moreover, the optimum inversion parameters are identified, and the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data, and the line-integrated SXRIS image is successfully inverted.
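
    The L-curve idea can be sketched on a toy ill-posed problem: sweep the regularization parameter, trace the residual norm against the solution norm on log axes, and pick the point of maximum curvature. The SXRIS geometry and the GSVD implementation are not reproduced; the spectrum and noise level below are assumptions.

    import numpy as np

    rng = np.random.default_rng(10)
    m, n = 80, 80
    U, _, Vt = np.linalg.svd(rng.normal(size=(m, n)))
    s = 10.0 ** np.linspace(0, -6, n)                    # rapidly decaying spectrum
    A = U @ np.diag(s) @ Vt
    x_true = Vt.T @ (1.0 / (1 + np.arange(n)))
    b = A @ x_true + 1e-4 * rng.normal(size=m)

    lams = 10.0 ** np.linspace(-8, 0, 60)
    rho, eta = [], []                                    # log residual and solution norms
    for lam in lams:
        fx = s / (s ** 2 + lam ** 2)                     # Tikhonov filter factors
        x = Vt.T @ (fx * (U.T @ b))
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))

    # discrete curvature of the (log rho, log eta) curve; the corner maximizes it
    r, e = np.array(rho), np.array(eta)
    dr, de = np.gradient(r), np.gradient(e)
    ddr, dde = np.gradient(dr), np.gradient(de)
    kappa = (dr * dde - ddr * de) / (dr ** 2 + de ** 2) ** 1.5
    print("L-curve corner at lambda ~", lams[np.argmax(np.abs(kappa))])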

  13. Regular flow reversals in Rayleigh-Bénard convection in a horizontal magnetic field.

    PubMed

    Tasaka, Yuji; Igaki, Kazuto; Yanagisawa, Takatoshi; Vogt, Tobias; Zuerner, Till; Eckert, Sven

    2016-04-01

    Magnetohydrodynamic Rayleigh-Bénard convection was studied experimentally using a liquid metal inside a box with a square horizontal cross section and aspect ratio of five. Systematic flow measurements were performed by means of ultrasonic velocity profiling, which can capture time variations of instantaneous velocity profiles. Applying a horizontal magnetic field organizes the convective motion into a flow pattern of quasi-two-dimensional rolls arranged parallel to the magnetic field. The number of rolls tends to decrease with increasing Rayleigh number Ra and to increase with increasing Chandrasekhar number Q. We explored convection regimes in a parameter range starting at Ra = 2×10^{3}; among them, the regime of regular flow reversals, in which five rolls periodically change the direction of their circulation with gradual skew of the roll axes, can be considered the most remarkable one. The regime appears around a range of Ra/Q = 10, where irregular flow reversals were observed by Yanagisawa et al. We performed a proper orthogonal decomposition (POD) analysis of the spatiotemporal velocity distribution and found that the regular flow reversals can be interpreted as a periodic emergence of a four-roll state within a dominant five-roll state. The POD analysis also provides a definition of the effective number of rolls as a more objective measure.
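
    POD itself reduces to an SVD of the mean-subtracted snapshot matrix. The sketch below runs on a synthetic stand-in for the velocity record; all signals and sizes are assumptions, not the experimental data.

    import numpy as np

    rng = np.random.default_rng(11)
    nx, nt = 128, 600
    x = np.linspace(0, 2 * np.pi, nx)[:, None]
    t = np.linspace(0, 50, nt)[None, :]
    # two "roll" patterns whose amplitudes oscillate slowly in time, plus noise
    V = (np.sin(5 * x) * np.cos(0.5 * t)
         + 0.4 * np.sin(4 * x) * np.sin(0.3 * t)
         + 0.05 * rng.normal(size=(nx, nt)))

    V0 = V - V.mean(axis=1, keepdims=True)          # remove the temporal mean
    U, s, Wt = np.linalg.svd(V0, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)
    print("energy captured by first two POD modes:", round(energy[:2].sum(), 3))

    # temporal coefficients of mode k: a_k(t) = s_k * Wt[k]; a periodic exchange of
    # energy between modes is the signature of a reversal cycle
    a = s[:, None] * Wt
    print("mode-1/mode-2 coefficient stds:", a[0].std().round(2), a[1].std().round(2))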

  14. Subcortical processing of speech regularities underlies reading and music aptitude in children.

    PubMed

    Strait, Dana L; Hornickel, Jane; Kraus, Nina

    2011-10-17

    Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. Definition of common biological underpinnings

  15. Subcortical processing of speech regularities underlies reading and music aptitude in children

    PubMed Central

    2011-01-01

    Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input

  16. Surface-based prostate registration with biomechanical regularization

    NASA Astrophysics Data System (ADS)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected onto ultrasound by using MR-US registration. A common approach is surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as the boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR-guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration compared with regular surface-based registration.

  17. Frequency-sensitive competitive learning for scalable balanced clustering on high-dimensional hyperspheres.

    PubMed

    Banerjee, Arindam; Ghosh, Joydeep

    2004-05-01

    Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact, it can be considered as a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques. Index terms: balanced clustering, expectation maximization (EM), frequency-sensitive competitive learning (FSCL), high-dimensional clustering, kmeans, normalized data, scalable clustering, streaming data, text clustering.
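
    A minimal spkmeans sketch follows: unit-normalized inputs and centroids with cosine-similarity assignments. The frequency-sensitive multiplier that discourages large clusters is omitted, and the data are synthetic assumptions.

    import numpy as np

    def spkmeans(X, k, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)      # project onto the unit sphere
        C = X[rng.choice(len(X), k, replace=False)].copy()    # init centroids from data
        for _ in range(iters):
            assign = np.argmax(X @ C.T, axis=1)               # nearest by cosine similarity
            for j in range(k):
                members = X[assign == j]
                if len(members):
                    c = members.sum(axis=0)
                    C[j] = c / np.linalg.norm(c)              # re-normalize the centroid
        return assign, C

    rng = np.random.default_rng(12)
    means = rng.normal(size=(3, 20))
    X = np.vstack([rng.normal(mu, 0.2, size=(100, 20)) for mu in means])
    assign, C = spkmeans(X, 3)
    print("cluster sizes:", np.bincount(assign))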

  18. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the non-decimated DCT wavelet frame transform shows promise as a regularizer for SPECT image reconstruction using the PAPA method.
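
    The core operation of such an ℓ1 DCT regularizer is soft thresholding of DCT coefficients. Below is one proximal step on a toy image; the full PAPA reconstruction loop and the SPECT system model are not reproduced, and the threshold value is an assumption.

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(13)
    img = np.zeros((64, 64))
    img[20:40, 20:40] = 1.0                                   # toy "hot lesion"
    noisy = img + 0.2 * rng.normal(size=img.shape)

    coeffs = dctn(noisy, norm="ortho")
    tau = 0.5                                                 # soft-threshold level
    coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)
    denoised = idctn(coeffs, norm="ortho")

    print("error std before/after:",
          np.std(noisy - img).round(3), np.std(denoised - img).round(3))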

  19. 5 CFR 532.203 - Structure of regular wage schedules.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Structure of regular wage schedules. (a)... (Title 5, Administrative Personnel, Vol. 1, revised as of 2010-01-01; OFFICE OF PERSONNEL MANAGEMENT; CIVIL SERVICE REGULATIONS; PREVAILING RATE SYSTEMS; Prevailing Rate Determinations; § 532.203 Structure of regular wage schedules)

  20. Endemic infections are always possible on regular networks

    NASA Astrophysics Data System (ADS)

    Del Genio, Charo I.; House, Thomas

    2013-10-01

    We study the dependence of the largest component in regular networks on the clustering coefficient, showing that its size changes smoothly without undergoing a phase transition. We explain this behavior via an analytical approach based on the network structure, and provide an exact equation describing the numerical results. Our work indicates that intrinsic structural properties always allow the spread of epidemics on regular networks.

  1. Single-cycle high-intensity electromagnetic pulse generation in the interaction of a plasma wakefield with regular nonlinear structures.

    PubMed

    Bulanov, S S; Esirkepov, T Zh; Kamenets, F F; Pegoraro, F

    2006-03-01

    The interaction of regular nonlinear structures (such as subcycle solitons, electron vortices, and wake Langmuir waves) with a strong wake wave in a collisionless plasma can be exploited in order to produce ultrashort electromagnetic pulses. The electromagnetic field of the nonlinear structure is partially reflected by the electron density modulations of the incident wake wave and a single-cycle high-intensity electromagnetic pulse is formed. Due to the Doppler effect the length of this pulse is much shorter than that of the nonlinear structure. This process is illustrated with two-dimensional particle-in-cell simulations. The considered laser-plasma interaction regimes can be achieved in present day experiments and can be used for plasma diagnostics.

  2. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    PubMed

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of genes is much larger than the number of sample units. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high-dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers than the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
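
    A simplified sketch of the idea follows, combining a rank-based (outlier-robust) correlation estimate with universal thresholding of small entries; the paper's estimator and its adaptive threshold are more refined, and the sizes and threshold constant below are assumptions.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(14)
    n, p = 100, 300                                   # few samples, many genes
    Z = rng.normal(size=(n, p))
    Z[:, 1] = Z[:, 0] + 0.3 * rng.normal(size=n)      # one truly correlated gene pair
    Z[0, 2] = 25.0                                    # an outlying sample unit

    R, _ = spearmanr(Z)                               # rank-based, robust to the outlier
    tau = 2.0 * np.sqrt(np.log(p) / n)                # universal threshold level
    R_reg = np.where(np.abs(R) >= tau, R, 0.0)        # kill small (spurious) correlations
    np.fill_diagonal(R_reg, 1.0)

    print("off-diagonal entries kept:", (np.count_nonzero(R_reg) - p) // 2)
    print("gene 0 - gene 1 correlation:", R_reg[0, 1].round(2))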

  3. Diffraction of a shock wave by a compression corner; regular and single Mach reflection

    NASA Technical Reports Server (NTRS)

    Vijayashankar, V. S.; Kutler, P.; Anderson, D.

    1976-01-01

    The two-dimensional, time-dependent Euler equations, which govern the flow field resulting from the interaction of a planar shock with a compression corner, are solved with initial conditions that result in either regular reflection or single Mach reflection of the incident planar shock. The Euler equations, which are hyperbolic, are transformed to include the self-similarity of the problem. A normalization procedure is employed to align the reflected shock and the Mach stem as computational boundaries to implement the shock-fitting procedure. A special floating fitting scheme is developed in conjunction with the method of characteristics to fit the slip surface. The reflected shock, the Mach stem, and the slip surface are all treated as sharp discontinuities, thus resulting in a more accurate description of the inviscid flow field. The resulting numerical solutions are compared with available experimental data and existing first-order, shock-capturing numerical solutions.

  4. 76 FR 3629 - Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-20

    ... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the...

  5. Production of Supra-regular Spatial Sequences by Macaque Monkeys.

    PubMed

    Jiang, Xinjian; Long, Tenghai; Cao, Weicong; Li, Junru; Dehaene, Stanislas; Wang, Liping

    2018-06-18

    Understanding and producing embedded sequences in language, music, or mathematics, is a central characteristic of our species. These domains are hypothesized to involve a human-specific competence for supra-regular grammars, which can generate embedded sequences that go beyond the regular sequences engendered by finite-state automata. However, is this capacity truly unique to humans? Using a production task, we show that macaque monkeys can be trained to produce time-symmetrical embedded spatial sequences whose formal description requires supra-regular grammars or, equivalently, a push-down stack automaton. Monkeys spontaneously generalized the learned grammar to novel sequences, including longer ones, and could generate hierarchical sequences formed by an embedding of two levels of abstract rules. Compared to monkeys, however, preschool children learned the grammars much faster using a chunking strategy. While supra-regular grammars are accessible to nonhuman primates through extensive training, human uniqueness may lie in the speed and learning strategy with which they are acquired. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Optimal boundary regularity for a singular Monge-Ampère equation

    NASA Astrophysics Data System (ADS)

    Jian, Huaiyu; Li, You

    2018-06-01

    In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from a few geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce the (a, η) type to describe the convexity. As a result, we show that the more convex the domain is, the better the regularity of the solution. In particular, the regularity is best near angular points.

  7. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters, and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  8. Hydrologic Process Regularization for Improved Geoelectrical Monitoring of a Lab-Scale Saline Tracer Experiment

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.

    2016-12-01

    Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. While mass recovery generally deteriorated with increasing number of time

  9. 75 FR 1057 - Farm Credit Administration Board; Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-08

    ... FARM CREDIT ADMINISTRATION Farm Credit Administration Board; Regular Meeting AGENCY: Farm Credit Administration. SUMMARY: Notice is hereby given, pursuant to the Government in the Sunshine Act (5 U.S.C. 552b(e)(3)), of the regular meeting of the Farm Credit Administration Board (Board). Date and Time: The...

  10. Lipschitz regularity results for nonlinear strictly elliptic equations and applications

    NASA Astrophysics Data System (ADS)

    Ley, Olivier; Nguyen, Vinh Duc

    2017-10-01

    Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic, for instance). For equations with superquadratic growth in the gradient, one usually uses weak Bernstein-type arguments, which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth of the Hamiltonian with respect to the gradient variable, using ideas from the Ishii-Lions method. We use these bounds to solve an ergodic problem and to study the regularity and the large time behavior of the solution of the evolution equation.
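
    For orientation, a model problem of the type covered by such results (our illustrative example, not taken from the article) is

    ```latex
    \[
      -\,\mathrm{tr}\big(A(x)\,D^2u\big) \;+\; b(x)\,|Du|^{m} \;+\; \lambda u \;=\; f(x)
      \quad \text{in } \mathbb{R}^N, \qquad m > 2,
    \]
    ```

    where the superquadratic exponent m rules out the classical weak Bernstein argument unless extra regularity of the gradient nonlinearity is assumed.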

  11. Processing SPARQL queries with regular expressions in RDF databases

    PubMed Central

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL, the W3C-recommended query language for RDF databases, has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data, as well as users’ incomplete knowledge of the exact value of each fact in the RDF databases, it is desirable to support SPARQL queries with regular expression patterns over RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only support matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225
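
    For illustration, the kind of query targeted by this work combines SPARQL triple patterns with a regex FILTER. The sketch below runs such a query through rdflib rather than the authors' engine; the file name and property names are hypothetical.

    ```python
    # Hedged example: a SPARQL query with a regular-expression FILTER,
    # executed here with rdflib (not the paper's optimized framework).
    from rdflib import Graph

    g = Graph()
    g.parse("proteins.ttl", format="turtle")      # hypothetical RDF data

    query = """
    PREFIX up: <http://purl.uniprot.org/core/>
    SELECT ?protein ?name WHERE {
        ?protein up:name ?name .
        FILTER regex(str(?name), "kinase", "i")
    }
    """
    for row in g.query(query):
        print(row.protein, row.name)
    ```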

  12. Processing SPARQL queries with regular expressions in RDF databases.

    PubMed

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL, the W3C-recommended query language for RDF databases, has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data, as well as users' incomplete knowledge of the exact value of each fact in the RDF databases, it is desirable to support SPARQL queries with regular expression patterns over RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only support matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.

  13. Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization

    NASA Astrophysics Data System (ADS)

    Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing

    2015-05-01

    Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT)-based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of the DCT coefficients to suppress the reconstruction noise. In addition, the permissible region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than the other L1-norm regularizations employed in this work.
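
    A one-dimensional toy version of the DCT-domain reweighting idea (our simplification, not the paper's FMT implementation): the weights are refreshed from the current DCT coefficients before each soft-thresholding step.

    ```python
    # Sketch: reweighted L1 shrinkage in the DCT domain.
    import numpy as np
    from scipy.fft import dct, idct

    def reweighted_l1_dct(y, lam=0.1, eps=1e-3, iters=10):
        x = y.copy()
        for _ in range(iters):
            c = dct(x, norm='ortho')
            w = 1.0 / (np.abs(c) + eps)          # adaptive weights from coefficients
            c = np.sign(c) * np.maximum(np.abs(c) - lam * w, 0.0)  # soft threshold
            x = idct(c, norm='ortho')
        return x
    ```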

  14. Adding statistical regularity results in a global slowdown in visual search.

    PubMed

    Vaskevich, Anna; Luria, Roy

    2018-05-01

    Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization, however, requires appropriate selection of the associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein's unbiased risk estimate, SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers, including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
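
    The core computational trick is the Monte-Carlo estimate of the divergence of a black-box reconstruction map, sketched below for the real-valued Gaussian case (function names are ours; the complex-valued weighted version in the paper differs in detail).

    ```python
    # Sketch: Monte-Carlo divergence and a SURE-type MSE estimate for a
    # black-box reconstruction f(.) under i.i.d. Gaussian noise of variance sigma2.
    import numpy as np

    def mc_divergence(f, y, eps=1e-3, rng=np.random.default_rng(0)):
        b = rng.standard_normal(y.shape)          # random probe vector
        return float(np.vdot(b, f(y + eps * b) - f(y)).real) / eps

    def sure_estimate(f, y, sigma2):
        n = y.size
        return (np.sum(np.abs(f(y) - y)**2) / n
                - sigma2 + 2.0 * sigma2 * mc_divergence(f, y) / n)
    ```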

  16. Sparsely sampling the sky: Regular vs. random sampling

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.

    2015-09-01

    Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the usually favoured contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparsely observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales, as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling for constraining the overall galaxy power spectrum and the cosmological parameters.

  17. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
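
    The Soft-Impute iteration itself is only a few lines; here is a minimal dense-matrix sketch (the paper's efficiency comes from low-rank SVD updates that this toy version does not implement).

    ```python
    # Sketch of Soft-Impute: fill missing entries with the current estimate,
    # then soft-threshold the singular values of the completed matrix.
    import numpy as np

    def soft_impute(X, mask, lam, iters=100):
        """X: data matrix (values at mask==False ignored); mask: observed entries."""
        Z = np.zeros_like(X)
        for _ in range(iters):
            Y = np.where(mask, X, Z)                 # impute with current estimate
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            Z = (U * np.maximum(s - lam, 0.0)) @ Vt  # singular value shrinkage
        return Z
    ```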

  18. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  19. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction under a regularization approach that depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
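
    In our notation, a plausible form of the variational problem the paper studies is

    ```latex
    \[
      \hat{x} \;\in\; \arg\min_{x \,\ge\, 0}\;
      \sum_i \Big[\, (Ax)_i - y_i \log (Ax)_i \,\Big]
      \;+\; \tau \,\big\| \mathcal{H}x \big\|_{\mathcal{S}_p,1},
    \]
    ```

    where the first term is the Poisson negative log-likelihood for the operator A, and the second sums the Schatten p-norms of the Hessian of x over all pixels; the two ADMM variants split these terms differently.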

  20. 12 CFR 311.5 - Regular procedure for closing meetings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Regular procedure for closing meetings. 311.5 Section 311.5 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE RULES GOVERNING PUBLIC OBSERVATION OF MEETINGS OF THE CORPORATION'S BOARD OF DIRECTORS § 311.5 Regular...

  1. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were fitted with various fluorochrome inclusions (Cy5.5), for which optical data at 60 projections over 360 degrees were acquired. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother than those of classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
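
    A rough sketch of a Richardson-Lucy (MLEM) update with a floating default prior follows; this is our simplified reading of such a scheme (the pull toward a Gaussian-smoothed copy of the current estimate stands in for the entropic prior term), with all parameter values hypothetical.

    ```python
    # Sketch: MLEM / Richardson-Lucy with a floating default prior.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def rl_floating_prior(A, y, n_iter=50, beta=0.1, sigma=2.0):
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])            # sensitivity image
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)
            x_ml = x * (A.T @ ratio) / sens         # plain RL / MLEM step
            prior = gaussian_filter(x_ml, sigma)    # floating default: smoothed copy
            x = (x_ml + beta * prior) / (1.0 + beta)  # pull toward the prior
        return x
    ```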

  2. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including the identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
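
    SpaRSA itself adds adaptive step sizes, but the underlying computation is iterative soft-thresholding on the dictionary coefficients; a minimal ISTA-style sketch of the convex model above (our notation, with Phi standing for the product of the transfer matrix and the dictionary):

    ```python
    # Sketch: minimize 0.5*||y - Phi a||^2 + lam*||a||_1 by soft-thresholding.
    import numpy as np

    def ista(Phi, y, lam, iters=500):
        L = np.linalg.norm(Phi, 2)**2             # Lipschitz constant of the gradient
        a = np.zeros(Phi.shape[1])
        for _ in range(iters):
            g = a + Phi.T @ (y - Phi @ a) / L     # gradient step
            a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
        return a
    ```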

  3. The neural substrates of impaired finger tapping regularity after stroke.

    PubMed

    Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude

    2010-03-01

    Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  4. Power Enhancement in High Dimensional Cross-Sectional Tests

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Yao, Jiawei

    2016-01-01

    We propose a novel technique to boost the power of testing a high-dimensional vector H_0 : θ = 0 against sparse alternatives where the null hypothesis is violated only by a few components. Existing tests based on quadratic forms, such as the Wald statistic, often suffer from low power due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives, such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or the bootstrap to derive the null distribution, and often suffer from size distortions due to slow convergence. Based on a screening technique, we introduce a “power enhancement component”, which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing factor pricing models and validating cross-sectional independence in panel data models. PMID:26778846
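
    Schematically (our notation, suppressing studentization details; the exact form in the paper may differ), the construction reads

    ```latex
    \[
      J \;=\; J_0 + J_1, \qquad
      J_0 \;=\; \sqrt{n}\sum_{j \in \widehat{S}} \frac{\hat{\theta}_j^{\,2}}{\hat{\sigma}_j},
      \qquad
      \widehat{S} \;=\; \big\{\, j \,:\, |\hat{\theta}_j| > \delta_n \,\big\},
    \]
    ```

    so that the screening set is empty with high probability under H_0 (making J_0 = 0), while J_0 diverges under sparse alternatives; J_1 is the asymptotically pivotal statistic that fixes the null distribution.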

  5. Four Data Based Objections to the Regular Education Initiative.

    ERIC Educational Resources Information Center

    Anderegg, M. L.; Vergason, Glenn A.

    One of the changes advocated by the Regular Education Initiative (REI) is the placement of all students with disabilities in regular education classes. This paper analyzes this REI proposal and discusses four objections, with citations to relevant literature: (1) restriction of the continuum of services, which may result in students being put…

  6. Inclusion Professional Development Model and Regular Middle School Educators

    ERIC Educational Resources Information Center

    Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo

    2014-01-01

    The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…

  7. Gauge theory for finite-dimensional dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurfil, Pini

    2007-06-15

    Gauge theory is a well-established concept in quantum physics, electrodynamics, and cosmology. This concept has recently proliferated into new areas, such as mechanics and astrodynamics. In this paper, we discuss a few applications of gauge theory in finite-dimensional dynamical systems. We focus on the concept of rescriptive gauge symmetry, which is, in essence, rescaling of an independent variable. We show that a simple gauge transformation of multiple harmonic oscillators driven by chaotic processes can render an apparently "disordered" flow into a regular dynamical process, and that there exists a strong connection between gauge transformations and reduction theory of ordinary differential equations. Throughout the discussion, we demonstrate the main ideas by considering examples from diverse fields, including quantum mechanics, chemistry, rigid-body dynamics, and information theory.

  8. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence it is important to employ appropriate image priors for regularization. One recently popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D, for natural images, and outperforms them significantly for piecewise smooth images.
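
    In discrete form the regularizer is the quadratic x^T L x for a patch-graph Laplacian L; a toy denoising sketch (all weights and sizes hypothetical):

    ```python
    # Sketch: graph Laplacian regularized denoising, solving
    # min_x ||x - y||^2 + lam * x^T L x  as a linear system.
    import numpy as np

    def gaussian_graph(coords, feats, radius=1.5, sigma=0.2):
        n = len(feats)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                if np.linalg.norm(coords[i] - coords[j]) <= radius:  # neighbours
                    W[i, j] = W[j, i] = np.exp(
                        -np.sum((feats[i] - feats[j])**2) / (2.0 * sigma**2))
        return W

    def laplacian_denoise(y, W, lam=1.0):
        L = np.diag(W.sum(axis=1)) - W            # combinatorial Laplacian
        return np.linalg.solve(np.eye(len(y)) + lam * L, y)
    ```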

  9. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data

    PubMed Central

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
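
    The starting point of such estimators can be sketched concretely: under MCAR, each covariance entry is averaged over the samples where both coordinates are observed, and banding or thresholding would then be applied to this matrix. The sketch below is generic, not the authors' exact estimator.

    ```python
    # Sketch: pairwise-complete sample covariance from data with missing entries.
    import numpy as np

    def pairwise_complete_cov(X):
        """X: n x p array with np.nan marking missing entries."""
        p = X.shape[1]
        mu = np.nanmean(X, axis=0)
        S = np.empty((p, p))
        for j in range(p):
            for k in range(p):
                both = ~np.isnan(X[:, j]) & ~np.isnan(X[:, k])  # jointly observed
                S[j, k] = np.mean((X[both, j] - mu[j]) * (X[both, k] - mu[k]))
        return S
    ```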

  10. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    PubMed

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.

  11. A simple way to measure daily lifestyle regularity

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Frank, Ellen; Potts, Jaime M.; Kupfer, David J.

    2002-01-01

    A brief diary instrument to quantify daily lifestyle regularity (SRM-5) is developed and compared with a much longer version of the instrument (SRM-17) described and used previously. Three studies are described. In Study 1, SRM-17 scores (2 weeks) were collected from a total of 293 healthy control subjects (both genders) aged between 19 and 92 years. Five items (1) Get out of bed, (2) First contact with another person, (3) Start work, housework or volunteer activities, (4) Have dinner, and (5) Go to bed were then selected from the 17 items and SRM-5 scores calculated as if these five items were the only ones collected. Comparisons were made with SRM-17 scores from the same subject-weeks, looking at correlations between the two SRM measures, and the effects of age and gender on lifestyle regularity as measured by the two instruments. In Study 2 this process was repeated in a group of 27 subjects who were in remission from unipolar depression after treatment with psychotherapy and who completed SRM-17 for at least 20 successive weeks. SRM-5 and SRM-17 scores were then correlated within an individual using time as the random variable, allowing an indication of how successful SRM-5 was in tracking changes in lifestyle regularity (within an individual) over time. In Study 3 an SRM-5 diary instrument was administered to 101 healthy control subjects (both genders, aged 20-59 years) for two successive weeks to obtain normative measures and to test for correlations with age and morningness. Measures of lifestyle regularity from SRM-5 correlated quite well (about 0.8) with those from SRM-17 both between subjects, and within-subjects over time. As a detector of irregularity as defined by SRM-17, the SRM-5 instrument showed acceptable values of kappa (0.69), sensitivity (74%) and specificity (95%). There were, however, differences in mean level, with SRM-5 scores being about 0.9 units [about one standard deviation (SD)] above SRM-17 scores from the same subject-weeks. SRM-5

  12. 3D undersampled golden-radial phase encoding for DCE-MRA using inherently regularized iterative SENSE.

    PubMed

    Prieto, Claudia; Uribe, Sergio; Razavi, Reza; Atkinson, David; Schaeffter, Tobias

    2010-08-01

    One of the current limitations of dynamic contrast-enhanced MR angiography is the requirement of both high spatial and high temporal resolution. Several undersampling techniques have been proposed to overcome this problem. However, in most of these methods the tradeoff between spatial and temporal resolution is constant for all the time frames and needs to be specified prior to data collection. This is not optimal for dynamic contrast-enhanced MR angiography where the dynamics of the process are difficult to predict and the image quality requirements are changing during the bolus passage. Here, we propose a new highly undersampled approach that allows the retrospective adaptation of the spatial and temporal resolution. The method combines a three-dimensional radial phase encoding trajectory with the golden angle profile order and non-Cartesian Sensitivity Encoding (SENSE) reconstruction. Different regularization images, obtained from the same acquired data, are used to stabilize the non-Cartesian SENSE reconstruction for the different phases of the bolus passage. The feasibility of the proposed method was demonstrated on a numerical phantom and in three-dimensional intracranial dynamic contrast-enhanced MR angiography of healthy volunteers. The acquired data were reconstructed retrospectively with temporal resolutions from 1.2 sec to 8.1 sec, providing a good depiction of small vessels, as well as distinction of different temporal phases.
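
    The golden-angle profile order mentioned above is easy to state in code: successive profiles are rotated by roughly 111.25 degrees, so any contiguous run of profiles covers the angular range near-uniformly, which is what makes the retrospective choice of temporal resolution possible. A short sketch:

    ```python
    # Sketch: golden-angle ordering of radial phase-encoding profiles.
    import numpy as np

    phi = (1.0 + np.sqrt(5.0)) / 2.0           # golden ratio
    golden_angle = 180.0 / phi                 # ~111.246 degrees
    angles = np.mod(np.arange(64) * golden_angle, 360.0)
    ```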

  13. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  14. Extension of Strongly Regular Graphs

    DTIC Science & Technology

    2008-02-11


  15. Academic Improvement through Regular Assessment

    ERIC Educational Resources Information Center

    Wolf, Patrick J.

    2007-01-01

    Media reports are rife with claims that students in the United States are overtested and that they and their education are suffering as result. Here I argue the opposite--that students would benefit in numerous ways from more frequent assessment, especially of diagnostic testing. The regular assessment of students serves critical educational and…

  16. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  17. Buckling of a growing tissue and the emergence of two-dimensional patterns

    PubMed Central

    Nelson, M.R.; King, J.R.; Jensen, O.E.

    2013-01-01

    The process of biological growth and the associated generation of residual stress has previously been considered as a driving mechanism for tissue buckling and pattern selection in numerous areas of biology. Here, we develop a two-dimensional thin plate theory to simulate the growth of cultured intestinal epithelial cells on a deformable substrate, with the goal of elucidating how a tissue engineer might best recreate the regular array of invaginations (crypts of Lieberkühn) found in the wall of the mammalian intestine. We extend the standard von Kármán equations to incorporate inhomogeneity in the plate’s mechanical properties and surface stresses applied to the substrate by cell proliferation. We determine numerically the configurations of a homogeneous plate under uniform cell growth, and show how tethering to an underlying elastic foundation can be used to promote higher-order buckled configurations. We then examine the independent effects of localised softening of the substrate and spatial patterning of cellular growth, demonstrating that (within a two-dimensional framework, and contrary to the predictions of one-dimensional models) growth patterning constitutes a more viable mechanism for control of crypt distribution than does material inhomogeneity. PMID:24128749

  18. Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters

    NASA Astrophysics Data System (ADS)

    Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada

    2011-01-01

    We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, improves on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared with that of existing reconstruction algorithms, in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation image in the mechanical linear scanning framework is also reported.
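
    The standard Savitzky-Golay building block that the CSG filter extends is readily available; a hedged sketch of smoothing a stack of parallel, evenly spaced slices along the slice axis (array sizes hypothetical; the cyclic indicator and simultaneous interpolation of the CSG filter are not reproduced here):

    ```python
    # Sketch: least-squares polynomial (Savitzky-Golay) smoothing across slices.
    import numpy as np
    from scipy.signal import savgol_filter

    stack = np.random.rand(40, 128, 128)       # slices x rows x cols (hypothetical)
    smooth = savgol_filter(stack, window_length=7, polyorder=2, axis=0)
    ```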

  19. Particle motion and Penrose processes around rotating regular black hole

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

    The motion of neutral particles around a rotating regular black hole derived from the Ayón-Beato-García (ABG) black hole solution by the Newman-Janis algorithm in the preceding paper (Toshmatov et al., Phys. Rev. D, 89:104017, 2014) has been studied. The dependence of the ISCO (innermost stable circular orbit along geodesics) and of unstable orbits on the value of the electric charge of the rotating regular black hole has been shown. Energy extraction from the rotating regular black hole through various processes has been examined. We have found an expression for the center-of-mass energy of colliding neutral particles coming from infinity, based on the BSW (Bañados-Silk-West) mechanism. The electric charge Q of the rotating regular black hole decreases the potential of the gravitational field as compared to the Kerr black hole, and the particles demonstrate less bound energy at the circular geodesics. This causes an increase in the efficiency of energy extraction through the BSW process in the presence of the electric charge Q. Furthermore, we have studied particle emission due to the BSW effect, assuming that two neutral particles collide near the horizon of the rotating regular extremal black hole and produce another two particles. We have shown that the efficiency of the energy extraction is less than the value of 146.6% valid for the Kerr black hole. It has also been demonstrated that the efficiency of energy extraction from the rotating regular black hole via the Penrose process decreases with increasing electric charge Q and is smaller than 20.7%, the value for the extreme Kerr black hole with specific angular momentum a = M.

  20. Correction of engineering servicing regularity of transport-technological machines in operational process

    NASA Astrophysics Data System (ADS)

    Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.

    2018-03-01

    In this article, the correction of engineering servicing regularity on the basis of actual dependability data of cars in operation is considered. The purpose of the research is to increase the dependability of transport-technological machines by correcting the engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. On the basis of an analysis of previous research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity by various mathematical models is considered, and the exponential model is shown to be the most appropriate for that purpose. The obtained results can be used as a stand-alone method of engineering servicing regularity correction that takes specific operational conditions into account, as well as for improving the technical-economical and economical-stochastic methods. Thus, on the basis of the conducted research, a method for correcting the engineering servicing regularity of transport-technological machines in the operational process was developed. The use of this method will allow decreasing the number of failures.

  1. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... in these bands of 20 uV/m or greater at a distance of 3 meters. During regular monitoring, any leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in the...

  2. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    NASA Astrophysics Data System (ADS)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA

    2018-05-01

    Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.

  3. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
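
    Of the three strategies, TSVD is the simplest to state; a generic sketch of a TSVD-regularized pseudoinverse for an interpolator design problem of this kind (the tolerance value is hypothetical):

    ```python
    # Sketch: truncated-SVD pseudoinverse; singular values below a relative
    # tolerance are discarded before inversion to stabilise the problem.
    import numpy as np

    def tsvd_pinv(A, rtol=1e-3):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rtol * s[0]                  # truncate small singular values
        return (Vt[keep].conj().T / s[keep]) @ U[:, keep].conj().T
    ```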

  4. Kinematics of swimming of the manta ray: three-dimensional analysis of open-water maneuverability.

    PubMed

    Fish, Frank E; Kolpas, Allison; Crossett, Andrew; Dudas, Michael A; Moored, Keith W; Bart-Smith, Hilary

    2018-03-22

    For aquatic animals, turning maneuvers represent a locomotor activity that may not be confined to a single coordinate plane, making analysis difficult, particularly in the field. To measure turning performance in a three-dimensional space for the manta ray (Mobula birostris), a large open-water swimmer, scaled stereo video recordings were collected. Movements of the cephalic lobes, eye and tail base were tracked to obtain three-dimensional coordinates. A mathematical analysis was performed on the coordinate data to calculate the turning rate and curvature (1/turning radius) as a function of time by numerically estimating the derivative of manta trajectories through three-dimensional space. Principal component analysis was used to project the three-dimensional trajectory onto the two-dimensional turn. Smoothing splines were applied to these turns. These are flexible models that minimize a cost function with a parameter controlling the balance between data fidelity and regularity of the derivative. Data for 30 sequences of rays performing slow, steady turns showed the highest 20% of values for the turning rate and the smallest 20% of turn radii were 42.65 ± 16.66 deg s^-1 and 2.05 ± 1.26 m, respectively. Such turning maneuvers fall within the range of performance exhibited by swimmers with rigid bodies. © 2018. Published by The Company of Biologists Ltd.

  5. A generalized Condat's algorithm of 1D total variation regularization

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2017-09-01

    A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising, referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Combining Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for the restoration of degraded signals.
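
    Both algorithms address the standard discrete 1D TV problem (our notation):

    ```latex
    \[
      \hat{x} \;=\; \arg\min_{x \in \mathbb{R}^{N}}\;
      \frac{1}{2}\sum_{n=1}^{N} (x_n - y_n)^2
      \;+\; \lambda \sum_{n=1}^{N-1} |x_{n+1} - x_n| ,
    \]
    ```

    which Condat's direct method and the taut string algorithm both solve exactly in essentially linear time.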

  6. Regular aspirin use and lung cancer risk.

    PubMed

    Moysich, Kirsten B; Menezes, Ravi J; Ronsani, Adrienne; Swede, Helen; Reid, Mary E; Cummings, K Michael; Falkner, Karen L; Loewen, Gregory M; Bepler, Gerold

    2002-11-26

    Although a large number of epidemiological studies have examined the role of aspirin in the chemoprevention of colon cancer and other solid tumors, there is a limited body of research focusing on the association between aspirin and lung cancer risk. We conducted a hospital-based case-control study to evaluate the role of regular aspirin use in lung cancer etiology. Study participants included 868 cases with primary, incident lung cancer and 935 hospital controls with non-neoplastic conditions who completed a comprehensive epidemiological questionnaire. Participants were classified as regular aspirin users if they had taken the drug at least once a week for at least one year. Results indicated that lung cancer risk was significantly lower for aspirin users compared to non-users (adjusted OR = 0.57; 95% CI 0.41-0.78). Although there was no clear evidence of a dose-response relationship, we observed risk reductions associated with greater frequency of use. Similarly, prolonged duration of use and increasing tablet years (tablets per day x years of use) was associated with reduced lung cancer risk. Risk reductions were observed in both sexes, but significant dose response relationships were only seen among male participants. When the analyses were restricted to former and current smokers, participants with the lowest cigarette exposure tended to benefit most from the potential chemopreventive effect of aspirin. After stratification by histology, regular aspirin use was significantly associated with reduced risk of small cell lung cancer and non-small cell lung cancer. Overall, results from this hospital-based case-control study suggest that regular aspirin use may be associated with reduced risk of lung cancer.

  7. Regular aspirin use and lung cancer risk

    PubMed Central

    Moysich, Kirsten B; Menezes, Ravi J; Ronsani, Adrienne; Swede, Helen; Reid, Mary E; Cummings, K Michael; Falkner, Karen L; Loewen, Gregory M; Bepler, Gerold

    2002-01-01

    Background Although a large number of epidemiological studies have examined the role of aspirin in the chemoprevention of colon cancer and other solid tumors, there is a limited body of research focusing on the association between aspirin and lung cancer risk. Methods We conducted a hospital-based case-control study to evaluate the role of regular aspirin use in lung cancer etiology. Study participants included 868 cases with primary, incident lung cancer and 935 hospital controls with non-neoplastic conditions who completed a comprehensive epidemiological questionnaire. Participants were classified as regular aspirin users if they had taken the drug at least once a week for at least one year. Results Results indicated that lung cancer risk was significantly lower for aspirin users compared to non-users (adjusted OR = 0.57; 95% CI 0.41–0.78). Although there was no clear evidence of a dose-response relationship, we observed risk reductions associated with greater frequency of use. Similarly, prolonged duration of use and increasing tablet years (tablets per day × years of use) was associated with reduced lung cancer risk. Risk reductions were observed in both sexes, but significant dose response relationships were only seen among male participants. When the analyses were restricted to former and current smokers, participants with the lowest cigarette exposure tended to benefit most from the potential chemopreventive effect of aspirin. After stratification by histology, regular aspirin use was significantly associated with reduced risk of small cell lung cancer and non-small cell lung cancer. Conclusions Overall, results from this hospital-based case-control study suggest that regular aspirin use may be associated with reduced risk of lung cancer. PMID:12453317

  8. Statistical Regularities Attract Attention when Task-Relevant.

    PubMed

    Alamia, Andrea; Zénon, Alexandre

    2016-01-01

    Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e., the effect of color predictability on reaction times (RTs), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the two colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task.

  9. Catalytic micromotor generating self-propelled regular motion through random fluctuation.

    PubMed

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-21

    Most of the current studies on nano/microscale motors to generate regular motion have adapted the strategy of fabricating a composite with different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  10. Catalytic micromotor generating self-propelled regular motion through random fluctuation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-01

    Most of the current studies on nano/microscale motors to generate regular motion have adapted the strategy of fabricating a composite with different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  11. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.

  12. Regularity estimates up to the boundary for elliptic systems of difference equations

    NASA Technical Reports Server (NTRS)

    Strikwerda, J. C.; Wade, B. A.; Bube, K. P.

    1986-01-01

    Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.

  13. Regularized Laplacian determinants of self-similar fractals

    NASA Astrophysics Data System (ADS)

    Chen, Joe P.; Teplyaev, Alexander; Tsougkas, Konstantinos

    2018-06-01

    We study the spectral zeta functions of the Laplacian on fractal sets which are locally self-similar fractafolds, in the sense of Strichartz. These functions are known to meromorphically extend to the entire complex plane, and the locations of their poles, sometimes referred to as complex dimensions, are of special interest. We give examples of locally self-similar sets such that their complex dimensions are not on the imaginary axis, which allows us to interpret their Laplacian determinant as the regularized product of their eigenvalues. We then investigate a connection between the logarithm of the determinant of the discrete graph Laplacian and the regularized one.
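
    For context, the regularized determinant invoked here is conventionally defined through the spectral zeta function. The following is the standard definition, not a formula specific to this paper:

    ```latex
    \zeta_{\Delta}(s) = \sum_{j} \lambda_j^{-s},
    \qquad
    {\det}_{\zeta} \Delta := \exp\bigl(-\zeta_{\Delta}'(0)\bigr),
    ```

    so that, when \zeta_{\Delta} is regular at s = 0, {\det}_{\zeta} \Delta plays the role of the regularized product of the eigenvalues \lambda_j.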

  14. Regularity and Tresse's theorem for geometric structures

    NASA Astrophysics Data System (ADS)

    Sarkisyan, R. A.; Shandra, I. G.

    2008-04-01

    For any non-special bundle P → X of geometric structures we prove that the k-jet space J^k of this bundle, with an appropriate k, contains an open dense domain U_k on which Tresse's theorem holds. For every s ≥ k we prove that the pre-image π_{k,s}^{-1}(U_k) of U_k under the natural projection π_{k,s}: J^s → J^k consists of regular points. (A point of J^s is said to be regular if the orbits of the group of diffeomorphisms induced from X have locally constant dimension in a neighbourhood of this point.)

  15. More academics in regular schools? The effect of regular versus special school placement on academic skills in Dutch primary school students with Down syndrome.

    PubMed

    de Graaf, G; van Hove, G; Haveman, M

    2013-01-01

    Studies from the UK have shown that children with Down syndrome acquire more academic skills in regular education. Does this likewise hold true for the Dutch situation, even after the effect of selective placement has been taken into account? In 2006, an extensive questionnaire was sent to 160 parents of (specially and regularly placed) children with Down syndrome (born 1993-2000) in primary education in the Netherlands, with a response rate of 76%. Questions were related to the child's school history, academic and non-academic skills, intelligence quotient, parental educational level, the extent to which parents worked on academics with their child at home, and the amount of academic instructional time at school. Academic skills were predicted with the other variables as independents. For the children in regular schools, much more time proved to be spent on academics. Academic performance appeared to be predicted reasonably well on the basis of age, non-academic skills, parental educational level and the extent to which parents worked at home on academics. However, more variance could be predicted when the total number of years that the child spent in regular education was added, especially regarding reading and, to a lesser extent, writing and math. In addition, we could show that this finding could not be accounted for by endogeneity. Regularly placed children with Down syndrome learn more academics. However, this is not a straight consequence of inclusive placement and age alone, but is also determined by factors such as cognitive functioning, non-academic skills, parental educational level and the extent to which parents worked at home on academics. Nevertheless, it could be shown that the more advanced academic skills of the regularly placed children are not only due to selective placement. The positive effect of regular school on academics appeared to be most pronounced for reading skills.

  16. On the Distinction between Regular and Irregular Inflectional Morphology: Evidence from Dinka

    ERIC Educational Resources Information Center

    Ladd, D. Robert; Remijsen, Bert; Manyang, Caguor Adong

    2009-01-01

    Discussions of the psycholinguistic significance of regularity in inflectional morphology generally deal with languages in which regular forms can be clearly identified and revolve around whether there are distinct processing mechanisms for regular and irregular forms. We present a detailed description of Dinka's notoriously irregular noun number…

  17. Measuring, Enabling and Comparing Modularity, Regularity and Hierarchy in Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2005-01-01

    For computer-automated design systems to scale to complex designs they must be able to produce designs that exhibit the characteristics of modularity, regularity and hierarchy - characteristics that are found both in man-made and natural designs. Here we claim that these characteristics are enabled by implementing the attributes of combination, control-flow and abstraction in the representation. To support this claim we use an evolutionary algorithm to evolve solutions to different sizes of a table design problem using five different representations, each with different combinations of modularity, regularity and hierarchy enabled, and show that the best performance happens when all three of these attributes are enabled. We also define metrics for modularity, regularity and hierarchy in design encodings and demonstrate that high fitness values are achieved with high values of modularity, regularity and hierarchy, and that there is a positive correlation between increases in fitness and increases in modularity, regularity and hierarchy.

  18. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
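
    The non-adaptive core of such a scheme, wavelet-domain soft thresholding, can be sketched with PyWavelets. The paper's contribution is precisely to adapt the regularization to scale and orientation, which this fixed-threshold toy omits; the wavelet, level and threshold below are illustrative assumptions.

    ```python
    import numpy as np
    import pywt

    def wavelet_soft_threshold(img, wavelet="db4", level=3, thresh=0.1):
        """Soft-threshold the detail coefficients of a 2-D wavelet transform."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        shrunk = [coeffs[0]] + [
            tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(shrunk, wavelet)

    noisy = np.random.default_rng(2).normal(size=(64, 64))
    restored = wavelet_soft_threshold(noisy)
    ```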

  19. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
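
    One standard derivation of that volume (not necessarily among the article's five methods) splits the octahedron of edge a into two square pyramids of base area a^2 and height a/\sqrt{2}:

    ```latex
    V = 2 \cdot \frac{1}{3} a^{2} \cdot \frac{a}{\sqrt{2}}
      = \frac{\sqrt{2}}{3}\, a^{3}
      \approx 0.4714\, a^{3}.
    ```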

  20. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

    The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper, the transfer function of the reconstruction model is redefined by a Fractional order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to improve the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of our suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to other cases of the FARF method and to traditional regularization methods for dynamic force reconstruction.
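
    The GCV step mentioned here can be sketched for plain Tikhonov regularization; the FARF filtering and transfer-function machinery is omitted, so this is a simplified illustration rather than the paper's method.

    ```python
    import numpy as np

    def gcv_tikhonov(A, b, lambdas):
        """Pick the Tikhonov parameter minimizing the GCV score,
        GCV(lam) = ||(I - A_lam) b||^2 / trace(I - A_lam)^2,
        where A_lam = A (A^T A + lam I)^{-1} A^T, via SVD filter factors."""
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        resid0 = np.sum(b**2) - np.sum(beta**2)    # part of b outside col(A)
        scores = []
        for lam in lambdas:
            f = s**2 / (s**2 + lam)                # filter factors
            resid = resid0 + np.sum(((1.0 - f) * beta) ** 2)
            scores.append(resid / (len(b) - f.sum()) ** 2)
        return lambdas[int(np.argmin(scores))]

    rng = np.random.default_rng(3)
    A = rng.normal(size=(50, 20))
    b = A @ rng.normal(size=20) + 0.05 * rng.normal(size=50)
    print(gcv_tikhonov(A, b, np.logspace(-6, 2, 50)))
    ```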

  1. Pointwise regularity of parameterized affine zipper fractal curves

    NASA Astrophysics Data System (ADS)

    Bárány, Balázs; Kiss, Gergely; Kolossváry, István

    2018-05-01

    We study the pointwise regularity of zipper fractal curves generated by affine mappings. Under the assumption of dominated splitting of index-1, we calculate the Hausdorff dimension of the level sets of the pointwise Hölder exponent for a subinterval of the spectrum. We give an equivalent characterization for the existence of regular pointwise Hölder exponent for Lebesgue almost every point. In this case, we extend the multifractal analysis to the full spectrum. In particular, we apply our results for de Rham’s curve.

  2. Vacuum polarization and classical self-action near higher-dimensional defects

    NASA Astrophysics Data System (ADS)

    Grats, Yuri V.; Spirin, Pavel

    2017-02-01

    We analyze the gravity-induced effects associated with a massless scalar field in a higher-dimensional spacetime being the tensor product of (d−n)-dimensional Minkowski space and an n-dimensional spherically/cylindrically symmetric space with a solid/planar angle deficit. These spacetimes are considered as simple models for a multidimensional global monopole (if n ≥ 3) or cosmic string (if n = 2) with (d−n−1) flat extra dimensions. Thus, we refer to them as conical backgrounds. In terms of the angular-deficit value, we derive the perturbative expression for the scalar Green function, valid for any d ≥ 3 and 2 ≤ n ≤ d−1, and compute it to the leading order. With the use of this Green function we compute the renormalized vacuum expectation value of the field square ⟨φ²(x)⟩_ren and the renormalized vacuum average of the scalar-field energy-momentum tensor ⟨T_MN(x)⟩_ren for arbitrary d and n from the interval mentioned above and arbitrary coupling constant ξ to the curvature. In particular, we revisit the computation of the vacuum polarization effects for a non-minimally coupled massless scalar field in the spacetime of a straight cosmic string. The same Green function enables us to consider the old purely classical problem of the gravity-induced self-action of a classical point-like scalar or electric charge placed at rest at some fixed point of the space under consideration. To deal with the divergences which appear in consideration of the two problems, we apply the dimensional-regularization technique widely used in quantum field theory. The explicit dependence of the results upon the dimensionalities of both the bulk and the conical submanifold is discussed.

  3. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose: To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the CG solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
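
    The "soft thresholding and FFTs" structure can be illustrated on a toy 1-D l1-regularized deconvolution, where the forward operator is a convolution diagonalized by the FFT, loosely analogous to the dipole kernel in QSM. This is a schematic ADMM sketch under those assumptions, not the paper's reconstruction code.

    ```python
    import numpy as np

    def soft(x, t):
        """Soft thresholding, the proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def l1_deconv_fft(y, K, lam=0.05, rho=1.0, iters=200):
        """ADMM for min_x 0.5*||k*x - y||^2 + lam*||x||_1, with the kernel
        given by its FFT K. Each iteration is closed form: an FFT-domain
        division for the x-update and a soft threshold for the z-update."""
        Y = np.fft.fft(y)
        x = np.zeros_like(y); z = np.zeros_like(y); u = np.zeros_like(y)
        denom = np.abs(K) ** 2 + rho
        for _ in range(iters):
            rhs = np.conj(K) * Y + rho * np.fft.fft(z - u)
            x = np.real(np.fft.ifft(rhs / denom))
            z = soft(x + u, lam / rho)
            u += x - z
        return z

    # Toy usage: recover a sparse spike train blurred by a Gaussian kernel.
    rng = np.random.default_rng(0)
    x_true = np.zeros(128); x_true[[20, 50, 90]] = [1.0, -0.7, 0.5]
    k = np.exp(-0.5 * ((np.arange(128) - 64) / 2.0) ** 2)
    K = np.fft.fft(np.fft.ifftshift(k / k.sum()))
    y = np.real(np.fft.ifft(K * np.fft.fft(x_true))) + 0.01 * rng.normal(size=128)
    x_hat = l1_deconv_fft(y, K)
    ```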

  4. Predictors of regular cigarette smoking among adolescent females: Does body image matter?

    PubMed Central

    Kaufman, Annette R.; Augustson, Erik M.

    2013-01-01

    This study examined how factors associated with body image predict regular smoking in adolescent females. Data were from the National Longitudinal Study of Adolescent Health (Add Health), a study of health-related behaviors in a nationally representative sample of adolescents in grades 7 through 12. Females in Waves I and II (n=6,956) were used for this study. Using SUDAAN to adjust for the sampling frame, univariate and multivariate analyses were performed to investigate whether baseline body image factors, including perceived weight, perceived physical development, trying to lose weight, and self-esteem, were predictive of regular smoking status 1 year later. In univariate analyses, perceived weight (p<.01), perceived physical development (p<.0001), trying to lose weight (p<.05), and self-esteem (p<.0001) significantly predicted regular smoking 1 year later. In the logistic regression model, perceived physical development (p<.05) and self-esteem (p<.001) significantly predicted regular smoking. The more developed a female reported being in comparison to other females her age, the more likely she was to be a regular smoker. Lower self-esteem was also predictive of regular smoking. Perceived weight and trying to lose weight failed to reach statistical significance in the multivariate model. The current study highlights the importance of perceived physical development and self-esteem when predicting regular smoking in adolescent females. Efforts to promote positive self-esteem in young females may be an important strategy when creating interventions to reduce regular cigarette smoking. PMID:18686177

  5. An ERP study of regular and irregular English past tense inflection.

    PubMed

    Newman, Aaron J; Ullman, Michael T; Pancheva, Roumyana; Waligura, Diane L; Neville, Helen J

    2007-01-01

    Compositionality is a critical and universal characteristic of human language. It is found at numerous levels, including the combination of morphemes into words and of words into phrases and sentences. These compositional patterns can generally be characterized by rules. For example, the past tense of most English verbs ("regulars") is formed by adding an -ed suffix. However, many complex linguistic forms have rather idiosyncratic mappings. For example, "irregular" English verbs have past tense forms that cannot be derived from their stems in a consistent manner. Whether regular and irregular forms depend on fundamentally distinct neurocognitive processes (rule-governed combination vs. lexical memorization), or whether a single processing system is sufficient to explain the phenomena, has engendered considerable investigation and debate. We recorded event-related potentials while participants read English sentences that were either correct or had violations of regular past tense inflection, irregular past tense inflection, syntactic phrase structure, or lexical semantics. Violations of regular past tense and phrase structure, but not of irregular past tense or lexical semantics, elicited left-lateralized anterior negativities (LANs). These seem to reflect neurocognitive substrates that underlie compositional processes across linguistic domains, including morphology and syntax. Regular, irregular, and phrase structure violations all elicited later positivities that were maximal over midline parietal sites (P600s), and seem to index aspects of controlled syntactic processing of both phrase structure and morphosyntax. The results suggest distinct neurocognitive substrates for processing regular and irregular past tense forms: regulars depending on compositional processing, and irregulars stored in lexical memory.

  6. Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, S. M.

    2018-03-01

    Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. Numerical results are given for the Earth-Moon system parameters, taking into account the perturbation of the Sun, for different L-matrices.

  7. Color normalization of histology slides using graph regularized sparse NMF

    NASA Astrophysics Data System (ADS)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods like PCA, ICA, NMF and SNMF fail to consider important information about the sparse manifolds that their pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like the Graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph-regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using a heat kernel in lαβ color space. The…
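
    The graph-regularized NMF core can be sketched with the multiplicative updates of Cai et al.'s GNMF, which minimize ||X - U V^T||_F^2 + lam * tr(V^T L V) with L = D - W the graph Laplacian. The sparsity term that distinguishes GSNMF is omitted for brevity, and the rank, lam and toy graph are illustrative assumptions.

    ```python
    import numpy as np

    def gnmf(X, W_adj, rank=3, lam=0.1, iters=200, eps=1e-9):
        """Graph-regularized NMF by multiplicative updates.
        X: (m, n) nonnegative data; W_adj: (n, n) nonnegative adjacency."""
        m, n = X.shape
        D = np.diag(W_adj.sum(axis=1))
        rng = np.random.default_rng(0)
        U = rng.random((m, rank)); V = rng.random((n, rank))
        for _ in range(iters):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            V *= (X.T @ U + lam * W_adj @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
        return U, V

    rng = np.random.default_rng(7)
    X = rng.random((40, 30))
    W_adj = np.eye(30, k=1) + np.eye(30, k=-1)   # chain graph between neighbors
    U, V = gnmf(X, W_adj)
    ```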

  8. Three-dimensional imaging of the brain cavities in human embryos.

    PubMed

    Blaas, H G; Eik-Nes, S H; Kiserud, T; Berg, S; Angelsen, B; Olstad, B

    1995-04-01

    A system for high-resolution three-dimensional imaging of small structures has been developed, based on the Vingmed CFM-800 annular array sector scanner with a 7.5-MHz transducer attached to a PC-based TomTec Echo-Scan unit. A stepper motor rotates the transducer 180 degrees, and the complete three-dimensional scan consists of 132 two-dimensional images, video-grabbed and scan-converted into a regular volumetric data set by the TomTec unit. Three normal pregnancies with embryos of gestational age 7, 9 and 10 weeks received a transvaginal examination with special attention to the embryonic/fetal brain. In all three cases, it was possible to obtain high-resolution images of the brain cavities. At 7 weeks, both hemispheres and their connection to the third ventricle were delineated. The isthmus rhombencephali could be visualized. At 9 weeks, the continuing development of the brain cavities could be followed, and at 11 weeks the dominating size of the hemispheres could be depicted. It is concluded that present ultrasound technology has reached a stage where structures of only a few millimeters can be imaged in vivo in three dimensions with a quality that resembles the plaster figures used in embryonic laboratories. The method can become an important tool in future embryological research and also in the detection of early developmental disorders of the embryo.

  9. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes the image by total variation (TV) and/or a wavelet or some other transform, which introduces some artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial relevance among pixels first, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered as the regularization term. Concretely, by transforming the original model, the SR question can be solved by two alternating iteration sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image can be obtained by solving the first and second sub-problems, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.

  10. Information transmission using non-Poisson regular firing.

    PubMed

    Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

    2013-04-01

    In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
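
    The regularity at stake is conventionally quantified by the coefficient of variation (CV) of interspike intervals: CV is about 1 for Poisson firing and lower for more regular firing. A quick numerical illustration (the scale parameters are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Poisson spiking: exponential ISIs, CV close to 1.
    isi_poisson = rng.exponential(scale=0.1, size=10000)
    # More regular non-Poisson spiking: gamma(4) ISIs, CV = 1/sqrt(4) = 0.5.
    isi_regular = rng.gamma(shape=4.0, scale=0.025, size=10000)

    for name, isi in [("Poisson", isi_poisson), ("gamma(4)", isi_regular)]:
        print(f"{name}: CV = {isi.std() / isi.mean():.2f}")
    ```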

  11. Sudden emergence of q-regular subgraphs in random graphs

    NASA Astrophysics Data System (ADS)

    Pretti, M.; Weigt, M.

    2006-07-01

    We investigate the computationally hard problem of whether a random graph of finite average vertex degree has an extensively large q-regular subgraph, i.e., a subgraph with all vertices having degree equal to q. We reformulate this problem as a constraint-satisfaction problem, and solve it using the cavity method of statistical physics at zero temperature. For q = 3, we find that the first large q-regular subgraphs appear discontinuously at an average vertex degree c_{3-reg} ≃ 3.3546 and immediately contain about 24% of all vertices in the graph. This transition is extremely close to (but different from) the well-known 3-core percolation point c_{3-core} ≃ 3.3509. For q > 3, the q-regular subgraph percolation threshold is found to coincide with that of the q-core.

  12. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  13. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  14. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  15. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  16. Three-dimensional HDlive imaging of an umbilical cord cyst.

    PubMed

    Inubashiri, Eisuke; Nishiyama, Naomi; Tatedo, Sayuri; Minami, Hiina; Saitou, Atushi; Watanabe, Yukio; Sugawara, Masaki

    2018-04-01

    Umbilical cord cysts (UCC) are a rare congenital malformation. Previous reports have suggested that second- and third-trimester UCC may be associated with other structural anomalies or chromosomal abnormalities. Therefore, high-quality imaging is clinically important for the antenatal diagnosis of UCC and for conducting a precise anatomical survey of intrauterine abnormalities. There have been few reports of antenatal diagnosis of UCC with conventional two- and three-dimensional ultrasonography. In this report, we demonstrate the novel visual depiction of UCC in utero with three-dimensional HDlive imaging, which helps substantially with prenatal diagnosis. A case with an abnormal placental mass at 16 weeks and 5 days of gestation was observed in detail using HDlive. HDlive revealed very realistic images of the intrauterine abnormality: the oval lesion was smooth with regular contours and a homogeneous wall at the site of cord insertion on the placenta. In addition, we confirmed the absence of umbilical cord, placental, and fetal structural anomalies. Here, we report a case wherein HDlive may have provided clinically valuable information for prenatal diagnosis of UCC and offered a potential advantage relative to conventional ultrasonography.

  17. Local Regularity Analysis with Wavelet Transform in Gear Tooth Failure Detection

    NASA Astrophysics Data System (ADS)

    Nissilä, Juhani

    2017-09-01

    Diagnosing gear tooth and bearing failures in industrial power transmission settings has been studied extensively, but challenges remain. This study aims to look at the problem from a more theoretical perspective. Our goal is to find out whether the local regularity, i.e. smoothness, of the measured signal can be estimated from the vibrations of epicyclic gearboxes, and whether the regularity can be linked to the meshing events of the gear teeth. Previously it has been shown that the decreasing local regularity of measured acceleration signals can reveal inner race faults in slowly rotating bearings. The local regularity is estimated from the modulus maxima ridges of the signal's wavelet transform. In this study, the measurements come from the epicyclic gearboxes of the Kelukoski water power station (WPS). The very stable rotational speed of the WPS makes it possible to deduce that the gear mesh frequencies of the WPS and a frequency related to the rotation of the turbine blades are the most significant components in the spectra of the estimated local regularity signals.

  18. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
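
    The split described here, a gradient step on the data-fidelity term followed by a proximal step that never differentiates the regularizer, is shared by the simpler proximal-gradient (ISTA) iteration for an l1 penalty, sketched below. This is a structural illustration only, not the paper's regularized dual averaging or its wave-equation solver.

    ```python
    import numpy as np

    def ista(A, b, lam=0.1, iters=200):
        """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
        a gradient step on the smooth term, then a soft-threshold
        (proximal) step for the nonsmooth regularizer."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz const.
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x - step * (A.T @ (A @ x - b))    # data-fidelity gradient
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
        return x

    rng = np.random.default_rng(8)
    A = rng.normal(size=(60, 100))
    x_true = np.zeros(100); x_true[[3, 30, 77]] = [2.0, -1.5, 1.0]
    b = A @ x_true + 0.01 * rng.normal(size=60)
    x_hat = ista(A, b)
    ```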

  19. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
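
    A minimal sketch of Tikhonov regularization with the second-order differential matrix singled out in the abstract; the kernel-matrix optimization over wavelengths is not modeled, and the test problem is an arbitrary illustration.

    ```python
    import numpy as np

    def tikhonov_second_order(A, b, lam):
        """Solve min_x ||A x - b||^2 + lam * ||L x||^2 with L the
        second-order difference (discrete second derivative) matrix."""
        n = A.shape[1]
        L = np.zeros((n - 2, n))
        for i in range(n - 2):
            L[i, i:i + 3] = [1.0, -2.0, 1.0]
        return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

    rng = np.random.default_rng(9)
    A = rng.normal(size=(40, 25))
    b = A @ np.sin(np.linspace(0, 3, 25)) + 0.01 * rng.normal(size=40)
    x = tikhonov_second_order(A, b, lam=1e-2)
    ```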

  20. The Temporal Dynamics of Regularity Extraction in Non-Human Primates

    ERIC Educational Resources Information Center

    Minier, Laure; Fagot, Joël; Rey, Arnaud

    2016-01-01

    Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm (Saffran, Aslin, & Newport, 1996) and the serial response time task (Nissen & Bullemer,…

  1. 12 CFR 407.3 - Procedures applicable to regularly scheduled meetings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Procedures applicable to regularly scheduled meetings. 407.3 Section 407.3 Banks and Banking EXPORT-IMPORT BANK OF THE UNITED STATES REGULATIONS GOVERNING PUBLIC OBSERVATION OF EX-IM BANK MEETINGS § 407.3 Procedures applicable to regularly scheduled...

  2. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Disability for any regular employment... Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is... employment since before age 22. To meet this definition of disability, a claimant must have a severe...

  3. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Disability for any regular employment... Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is... employment since before age 22. To meet this definition of disability, a claimant must have a severe...

  4. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Disability for any regular employment... Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is... employment since before age 22. To meet this definition of disability, a claimant must have a severe...

  5. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389
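
    For orientation, the Lq-penalized Cox objective that the harmonic regularization approximates can be written as follows; this is the generic penalized partial likelihood, since the abstract does not spell out the harmonic penalty itself:

    ```latex
    \hat{\beta} = \arg\min_{\beta}\;
      -\sum_{i:\,\delta_i = 1} \Bigl[ x_i^{\top}\beta
        - \log \sum_{j \in R(t_i)} \exp\bigl(x_j^{\top}\beta\bigr) \Bigr]
      + \lambda \sum_{k} \lvert \beta_k \rvert^{q},
      \qquad \tfrac{1}{2} < q < 1,
    ```

    where \delta_i is the event indicator and R(t_i) the risk set at time t_i.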

  6. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  7. Regularity of p(·)-superharmonic functions, the Kellogg property and semiregular boundary points

    NASA Astrophysics Data System (ADS)

    Adamowicz, Tomasz; Björn, Anders; Björn, Jana

    2014-11-01

    We study various boundary and inner regularity questions for $p(\cdot)$-(super)harmonic functions in Euclidean domains. In particular, we prove the Kellogg property and introduce a classification of boundary points for $p(\cdot)$-harmonic functions into three disjoint classes: regular, semiregular and strongly irregular points. Regular and especially semiregular points are characterized in many ways. The discussion is illustrated by examples. Along the way, we present a removability result for bounded $p(\cdot)$-harmonic functions and give some new characterizations of $W^{1,p(\cdot)}_0$ spaces. We also show that $p(\cdot)$-superharmonic functions are lower semicontinuously regularized, and characterize them in terms of lower semicontinuously regularized supersolutions.

  8. Regular black holes in Einstein-Gauss-Bonnet gravity

    NASA Astrophysics Data System (ADS)

    Ghosh, Sushant G.; Singh, Dharm Veer; Maharaj, Sunil D.

    2018-05-01

    Einstein-Gauss-Bonnet theory, a natural generalization of general relativity to a higher dimension, admits a static spherically symmetric black hole which was obtained by Boulware and Deser. This black hole is similar to its general relativity counterpart with a curvature singularity at r = 0. We present an exact 5D regular black hole metric, with parameter k > 0, that interpolates between the Boulware-Deser black hole (k = 0) and the Wiltshire charged black hole (r ≫ k). Owing to the appearance of the exponential correction factor e^{-k/r^2}, responsible for regularizing the metric, the thermodynamical quantities are modified, and it is demonstrated that the Hawking-Page phase transition is achievable. The heat capacity diverges at a critical radius r = r_C, where incidentally the temperature is maximum. Thus, we have a regular black hole with Cauchy and event horizons, and evaporation leads to a thermodynamically stable double-horizon black hole remnant with vanishing temperature. The entropy does not satisfy the usual exact horizon area result of general relativity.

  9. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

    NASA Astrophysics Data System (ADS)

    Can, Van Hao

    2017-11-01

    In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of the random regular graphs.

  10. Comparison Study of Regularizations in Spectral Computed Tomography Reconstruction

    NASA Astrophysics Data System (ADS)

    Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong

    2018-12-01

    The energy-resolving photon-counting detectors in spectral computed tomography (CT) can acquire projections of an object in different energy channels. In other words, they are able to reliably distinguish the received photon energies. These detectors lead to the emerging spectral CT, which is also called multi-energy CT, energy-selective CT, color CT, etc. Spectral CT can provide additional information in comparison with the conventional CT in which energy integrating detectors are used to acquire polychromatic projections of an object being investigated. The measurements obtained by X-ray CT detectors are noisy in reality, especially in spectral CT where the photon number is low in each energy channel. Therefore, some regularization should be applied to obtain a better image quality for this ill-posed problem in spectral CT image reconstruction. Quadratic-based regularizations are not often satisfactory as they blur the edges in the reconstructed images. As a result, different edge-preserving regularization methods have been adopted for reconstructing high quality images in the last decade. In this work, we numerically evaluate the performance of different regularizers in spectral CT, including total variation, non-local means and anisotropic diffusion. The goal is to provide some practical guidance to accurately reconstruct the attenuation distribution in each energy channel of the spectral CT data.

  11. Three-dimensional object recognition based on planar images

    NASA Astrophysics Data System (ADS)

    Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.

    1993-01-01

    This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedron objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information of the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In our overall performance evaluation based on a 600 recognition cycles test, the system demonstrated an accuracy of above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions which must be clinically controlled as in any industrial robotic vision system.

  12. Adiabatic regularization for gauge fields and the conformal anomaly

    NASA Astrophysics Data System (ADS)

    Chu, Chong-Sun; Koyama, Yoji

    2017-03-01

    Adiabatic regularization for quantum field theory in conformally flat spacetime is known for scalar and Dirac fermion fields. In this paper, we complete the construction by establishing the adiabatic regularization scheme for the gauge field. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined in a similar way using Wentzel-Kramers-Brillouin-type (WKB-type) solutions as the scalar fields. As an application of the adiabatic method, we compute the trace of the energy momentum tensor and reproduce the known result for the conformal anomaly obtained by the other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study various renormalized physical quantities of theories coupled to (non-Abelian) gauge fields in conformally flat spacetime, such as conformal supersymmetric Yang Mills, inflation, and cosmology.

  13. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman-iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
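
    The split-Bregman total variation building block that the paper extends with non-local regularization is available off the shelf; a minimal denoising sketch (the weight and noise level are arbitrary choices):

    ```python
    from skimage import data
    from skimage.restoration import denoise_tv_bregman
    from skimage.util import random_noise

    clean = data.camera() / 255.0
    noisy = random_noise(clean, var=0.01)            # additive Gaussian noise
    restored = denoise_tv_bregman(noisy, weight=10.0)
    ```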

  14. On the regularization for nonlinear tomographic absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Dai, Jinghang; Yu, Tao; Xu, Lijun; Cai, Weiwei

    2018-02-01

    Tomographic absorption spectroscopy (TAS) has attracted increased research effort recently due to developments in both hardware and new imaging concepts such as nonlinear tomography and compressed sensing. Nonlinear TAS is one of the emerging modalities based on the concept of nonlinear tomography, and it has been successfully demonstrated both numerically and experimentally. However, all previous demonstrations were realized using only two orthogonal projections, simply for ease of implementation. In this work, we examine the performance of nonlinear TAS using other beam arrangements and test the effectiveness of the beam optimization technique that has been developed for linear TAS. In addition, so far only the smoothness prior has been adopted in nonlinear TAS; other useful priors, such as sparseness and model-based priors, have not been investigated yet. This work aims to show how these priors can be implemented and included in the reconstruction process. Regularization through a Bayesian formulation is introduced specifically for this purpose, and a method for determining a proper regularization factor is proposed. The comparative studies performed with different beam arrangements and regularization schemes on a few representative phantoms suggest that the beam optimization method developed for linear TAS also works for the nonlinear counterpart, and that the regularization scheme should be selected according to the available a priori information under specific application scenarios so as to achieve the best reconstruction fidelity. Though this work is conducted in the context of nonlinear TAS, it can also provide useful insights for other tomographic modalities.

  15. Power-law regularities in human language

    NASA Astrophysics Data System (ADS)

    Mehri, Ali; Lashkari, Sahar Mohammadpour

    2016-11-01

    The complex structure of human language enables us to exchange very complicated information. This communication system obeys some common nonlinear statistical regularities. We investigate four important long-range features of human language, performing our calculations on selected works of seven famous litterateurs. Zipf's law and Heaps' law, which imply well-known power-law behaviors, are established in human language, showing a qualitative inverse relation with each other. Furthermore, the informational content associated with the word ordering is measured using an entropic metric. We also calculate the fractal dimension of words in the text using the box counting method. The fractal dimension of each word, which is a positive value less than or equal to one, exhibits its spatial distribution in the text. Generally, we can claim that human language follows the mentioned power-law regularities. Power-law relations imply the existence of long-range correlations between the word types used to convey a particular idea.
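
    A rank-frequency check of Zipf's law, the first regularity listed, takes only a few lines; the toy text below is a placeholder, since the law is meaningful only on large corpora:

    ```python
    import numpy as np
    from collections import Counter

    def zipf_exponent(tokens):
        """Least-squares slope of log(frequency) vs. log(rank);
        Zipf's law predicts a slope near -1."""
        freqs = sorted(Counter(tokens).values(), reverse=True)
        ranks = np.arange(1, len(freqs) + 1)
        slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
        return slope

    text = "the quick brown fox jumps over the lazy dog the fox".split()
    print(zipf_exponent(text))   # meaningful only for much larger corpora
    ```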

  16. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    NASA Astrophysics Data System (ADS)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

    Microseismic signals are typically weak compared with the strong background noise. In order to effectively detect weak signals in microseismic data, we propose a mathematical-morphology-based approach. We decompose the initial data into several morphological multiscale components. To detect weak signals, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from the morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The results demonstrate that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.
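
    The STA/LTA picker applied to the processed data can be sketched in a few lines of numpy; the window lengths and synthetic trace are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np

    def sta_lta(x, n_sta=20, n_lta=200):
        """Classic STA/LTA ratio on the signal envelope; a trigger fires
        where the ratio exceeds a threshold (commonly 3 to 5)."""
        env = np.abs(x)
        sta = np.convolve(env, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(env, np.ones(n_lta) / n_lta, mode="same")
        return sta / np.maximum(lta, 1e-12)

    rng = np.random.default_rng(6)
    trace = rng.normal(0.0, 0.1, 2000)
    trace[1000:1060] += np.sin(np.linspace(0, 30, 60))   # weak buried event
    print(sta_lta(trace).argmax())                       # near sample 1000
    ```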

  17. Implications of the Regular Education Initiative Debate for School Psychologists.

    ERIC Educational Resources Information Center

    Davis, William E.

    The paper examines critical issues involved in the debate over the Regular Education Initiative (REI) to merge special and regular education, with emphasis on implications for school psychologists. The arguments of proponents and opponents of the REI are summarized and the lack of involvement by school psychologists is noted. The REI is seen to…

  18. The Effects of Regular Exercise on the Physical Fitness Levels

    ERIC Educational Resources Information Center

    Kirandi, Ozlem

    2016-01-01

    The purpose of the present research is investigating the effects of regular exercise on the physical fitness levels among sedentary individuals. The total of 65 sedentary male individuals between the ages of 19-45, who had never exercises regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…

  19. Myth 13: The Regular Classroom Teacher Can "Go It Alone"

    ERIC Educational Resources Information Center

    Sisk, Dorothy

    2009-01-01

    With most gifted students being educated in a mainstream model of education, the prevailing myth that the regular classroom teacher can "go it alone" and the companion myth that the teacher can provide for the education of gifted students through differentiation are alive and well. In reality, the regular classroom teacher is too often concerned…

  20. Buckling of a growing tissue and the emergence of two-dimensional patterns.

    PubMed

    Nelson, M R; King, J R; Jensen, O E

    2013-12-01

    The process of biological growth and the associated generation of residual stress has previously been considered as a driving mechanism for tissue buckling and pattern selection in numerous areas of biology. Here, we develop a two-dimensional thin plate theory to simulate the growth of cultured intestinal epithelial cells on a deformable substrate, with the goal of elucidating how a tissue engineer might best recreate the regular array of invaginations (crypts of Lieberkühn) found in the wall of the mammalian intestine. We extend the standard von Kármán equations to incorporate inhomogeneity in the plate's mechanical properties and surface stresses applied to the substrate by cell proliferation. We determine numerically the configurations of a homogeneous plate under uniform cell growth, and show how tethering to an underlying elastic foundation can be used to promote higher-order buckled configurations. We then examine the independent effects of localised softening of the substrate and spatial patterning of cellular growth, demonstrating that (within a two-dimensional framework, and contrary to the predictions of one-dimensional models) growth patterning constitutes a more viable mechanism for control of crypt distribution than does material inhomogeneity.
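
    For reference, the standard homogeneous Föppl-von Kármán system that the paper extends with material inhomogeneity and growth-induced surface stresses reads, in terms of the deflection w and the Airy stress function \chi (sign and scaling conventions vary between texts):

    ```latex
    D\,\nabla^{4} w = h\,[\chi, w] + p,
    \qquad
    \nabla^{4} \chi = -\tfrac{E}{2}\,[w, w],
    \qquad
    [f, g] \equiv f_{xx} g_{yy} + f_{yy} g_{xx} - 2 f_{xy} g_{xy},
    ```

    with bending stiffness D, plate thickness h, Young's modulus E and transverse load p.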

  1. Thermodynamics of a class of regular black holes with a generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Maluf, R. V.; Neves, Juliano C. S.

    2018-05-01

    In this article, we present a study of the thermodynamics of a class of regular black holes. This class includes the Bardeen and Hayward regular black holes. We obtain thermodynamic quantities like the Hawking temperature, entropy, and heat capacity for the entire class. As part of an effort to indicate some physical observable to distinguish regular black holes from singular black holes, we suggest that regular black holes are colder than singular black holes. Moreover, contrary to the Schwarzschild black hole, this class of regular black holes may be thermodynamically stable. From a generalized uncertainty principle, we also obtain the quantum-corrected thermodynamics for the studied class. Such quantum corrections provide a logarithmic term for the quantum-corrected entropy.

  2. Fermion-number violation in regularizations that preserve fermion-number symmetry

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Shamir, Yigal

    2003-01-01

    There exist both continuum and lattice regularizations of gauge theories with fermions which preserve chiral U(1) invariance (“fermion number”). Such regularizations necessarily break gauge invariance but, in a covariant gauge, one recovers gauge invariance to all orders in perturbation theory by including suitable counterterms. At the nonperturbative level, an apparent conflict then arises between the chiral U(1) symmetry of the regularized theory and the existence of ’t Hooft vertices in the renormalized theory. The only possible resolution of the paradox is that the chiral U(1) symmetry is broken spontaneously in the enlarged Hilbert space of the covariantly gauge-fixed theory. The corresponding Goldstone pole is unphysical. The theory must therefore be defined by introducing a small fermion-mass term that breaks explicitly the chiral U(1) invariance and is sent to zero after the infinite-volume limit has been taken. Using this careful definition (and a lattice regularization) for the calculation of correlation functions in the one-instanton sector, we show that the ’t Hooft vertices are recovered as expected.

  3. Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity

    NASA Astrophysics Data System (ADS)

    Thomas, Abey E.

    2018-05-01

    Bridges with unequal column heights are one of the main irregularities in bridge design, particularly while negotiating steep valleys, making such bridges vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they should perform in a regular fashion, i.e. the capacity of each column should be utilized evenly. However, this type of behaviour is often missing when the column heights are unequal along the length of the bridge, allowing the short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those mentioned in the literature, is assessed for bridges designed as per the Indian Standards in the present study.

  4. Global Regularity for Several Incompressible Fluid Models with Partial Dissipation

    NASA Astrophysics Data System (ADS)

    Wu, Jiahong; Xu, Xiaojing; Ye, Zhuan

    2017-09-01

    This paper examines the global regularity problem for several 2D incompressible fluid models with partial dissipation: the surface quasi-geostrophic (SQG) equation, the 2D Euler equation and the 2D Boussinesq equations. These are well-known models in fluid mechanics and geophysics. The fundamental issue of whether or not they are globally well-posed has attracted enormous attention. The corresponding models with partial dissipation may arise in physical circumstances when the dissipation varies in different directions. We show that the SQG equation with either horizontal or vertical dissipation always has global solutions. This is in sharp contrast with the inviscid SQG equation, for which the global regularity problem remains a prominent open question. Although the 2D Euler equation is globally well-posed for sufficiently smooth data, the associated equations with partial dissipation no longer conserve the vorticity, and their global regularity is not trivial. We are able to prove the global regularity for two partially dissipated Euler equations. Several global bounds are also obtained for a partially dissipated Boussinesq system.

  5. Elementary Particle Spectroscopy in Regular Solid Rewrite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trell, Erik

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed

  6. Construction of normal-regular decisions of Bessel typed special system

    NASA Astrophysics Data System (ADS)

    Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.

    2017-09-01

    A special system of second-order partial differential equations is solved in terms of degenerate hypergeometric functions that reduce to Bessel functions of two variables. To construct solutions of this system near its regular and irregular singularities, we use the Frobenius-Latysheva method, applying the notions of rank and antirank. We prove the basic theorem establishing the existence of four linearly independent solutions of the studied Bessel-type system. To prove the existence of normal-regular solutions, we establish necessary conditions for their existence. The existence and convergence of a normal-regular solution are then shown using the notions of rank and antirank.

  7. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  8. Regularity for Fully Nonlinear Elliptic Equations with Oblique Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Zhang, Kai

    2018-06-01

    In this paper, we obtain a series of regularity results for viscosity solutions of fully nonlinear elliptic equations with oblique derivative boundary conditions. In particular, we derive the pointwise C^α, C^{1,α} and C^{2,α} regularity. As byproducts, we also prove the A-B-P maximum principle, Harnack inequality, uniqueness and solvability of the equations.

  9. The persistence of the attentional bias to regularities in a changing environment.

    PubMed

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  10. Structural characterization of the packings of granular regular polygons.

    PubMed

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.
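    As a quick illustration of the log-normal description of the Voronoi free areas, the following sketch (synthetic stand-in data, not the paper's simulation output) fits a log-normal with scipy and checks the fit with a Kolmogorov-Smirnov test:

      # Minimal sketch: fit a log-normal to (stand-in) Voronoi free areas.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      free_areas = rng.lognormal(mean=-1.0, sigma=0.4, size=5000)  # stand-in

      # Location pinned at 0 since areas are strictly positive.
      shape, loc, scale = stats.lognorm.fit(free_areas, floc=0)
      ks = stats.kstest(free_areas, 'lognorm', args=(shape, loc, scale))
      print(f"sigma={shape:.3f}, median={scale:.3f}, KS p={ks.pvalue:.3f}")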

  11. 5 CFR 532.221 - Industries included in regular nonappropriated fund surveys.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Industries included in regular... CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.221 Industries... American Industry Classification System (NAICS) codes in all regular nonappropriated fund wage surveys...

  12. On split regular BiHom-Lie superalgebras

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Liangyun; Zhang, Chiping

    2018-06-01

    We introduce the class of split regular BiHom-Lie superalgebras as the natural extension of the one of split Hom-Lie superalgebras and the one of split Lie superalgebras. By developing techniques of connections of roots for this kind of algebras, we show that such a split regular BiHom-Lie superalgebra L is of the form L = U + Σ_{[α] ∈ Λ/∼} I_{[α]}, with U a subspace of the Abelian (graded) subalgebra H and any I_{[α]} a well-described (graded) ideal of L, satisfying [I_{[α]}, I_{[β]}] = 0 if [α] ≠ [β]. Under certain conditions, in the case of L being of maximal length, the simplicity of the algebra is characterized and it is shown that L is the direct sum of the family of its simple (graded) ideals.

  13. 3D Gravity Inversion using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Toushmalani, Reza; Saibi, Hakim

    2015-08-01

    Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for a gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined the 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from the Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of salt dome in this region.
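    The core of such an inversion can be written in a few lines. The sketch below (a generic linearized toy, not the authors' algorithm or data) solves the Tikhonov-regularized least-squares problem min ||Ax - d||^2 + lam^2 ||x||^2 in closed form for a made-up gravity kernel A and data d:

      # Minimal Tikhonov inversion sketch with a toy forward operator.
      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(50, 200))               # 50 data, 200 model cells
      x_true = np.zeros(200); x_true[90:110] = 1.0 # buried density anomaly
      d = A @ x_true + 0.05 * rng.normal(size=50)  # noisy observations

      def tikhonov(A, d, lam):
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ d)

      for lam in (0.1, 1.0, 10.0):                 # regularization sweep
          x = tikhonov(A, d, lam)
          print(lam, np.linalg.norm(A @ x - d), np.linalg.norm(x))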

  14. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in the sanitization of user input in software systems often lead to vulnerabilities, many of which are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as declarative, finite, greedy, and reluctant, using finite state transducers (FSTs). By projecting an FST onto its input/output tapes, we are able to solve atomic string constraints, which can be applied to both forward and backward image computation in model checking and symbolic execution of text-processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FSTs. A compact representation of FSTs is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.
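    The greedy/reluctant distinction that the paper formalizes with FSTs can be seen in any ordinary regex engine. A two-line illustration with Python's re module (our example, unrelated to SUSHI):

      import re

      s = "<b>bold</b> and <i>italic</i>"
      print(re.sub(r"<.*>", "", s))   # greedy: one maximal match, strips everything
      print(re.sub(r"<.*?>", "", s))  # reluctant: strips each tag -> 'bold and italic'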

  15. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interaction at S-wave velocity is analogous to the kinetic theory of gases: thermal

  16. Self-oscillations of a two-dimensional shear flow with forcing and dissipation

    NASA Astrophysics Data System (ADS)

    López Zazueta, A.; Zavala Sansón, L.

    2018-04-01

    Two-dimensional shear flows continuously forced in the presence of dissipative effects are studied by means of numerical simulations. In contrast with most previous studies, the forcing is confined to a finite region, so the behavior of the system is characterized by the long-term evolution of the global kinetic energy. We consider regimes with 1 < Re_λ << Re, where Re_λ is the Reynolds number associated with an external friction (such as bottom friction in quasi-two-dimensional flows), and Re is the traditional Reynolds number associated with Laplacian viscosity. Depending on Re_λ, the flow may develop Kelvin-Helmholtz instabilities that exhibit either regular or irregular oscillations. The results are discussed in two parts. First, the flow is limited to developing only one vortical instability by choosing an appropriate width of the forcing band. The most relevant regime is found for Re_λ > 36, in which the energy maintains a regular oscillation around a reference value. The flow configuration is an elliptical vortex tilted with respect to the forcing axis, which also oscillates steadily. Second, the flow is allowed to develop two Kelvin-Helmholtz billows and eventually more complicated structures. The regimes of the one-vortex case are observed again, except for Re_λ > 135. At these values, the energy oscillates chaotically as the two vortices merge, form dipolar structures, and split again, with irregular periodicity. The self-oscillations are explained as a result of the alternate competition between forcing and dissipation, which is verified by calculating the budget terms in the energy equation. The relevance of the forcing-vs.-dissipation competition is discussed for more general flow systems.

  17. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
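    The spectral-graph machinery underlying the method can be sketched compactly. The toy below (ours; a path graph stands in for a finite-element mesh, and a single low-pass kernel stands in for the wavelet frame) filters a noisy node signal in the graph Fourier domain defined by the Laplacian eigenvectors:

      # Minimal spectral graph filtering sketch (not the authors' wavelets).
      import numpy as np

      n = 64
      A = np.zeros((n, n))
      for i in range(n - 1):
          A[i, i + 1] = A[i + 1, i] = 1.0      # path graph adjacency
      L = np.diag(A.sum(1)) - A                # combinatorial graph Laplacian

      w, U = np.linalg.eigh(L)                 # graph Fourier basis
      rng = np.random.default_rng(2)
      signal = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.3 * rng.normal(size=n)

      g = np.exp(-5.0 * w / w.max())           # low-pass spectral kernel g(lambda)
      smoothed = U @ (g * (U.T @ signal))      # filter in the spectral domain
      print(np.round(smoothed[:5], 3))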

  18. AN ERP STUDY OF REGULAR AND IRREGULAR ENGLISH PAST TENSE INFLECTION

    PubMed Central

    Newman, Aaron J.; Ullman, Michael T.; Pancheva, Roumyana; Waligura, Diane L.; Neville, Helen J.

    2006-01-01

    Compositionality is a critical and universal characteristic of human language. It is found at numerous levels, including the combination of morphemes into words and of words into phrases and sentences. These compositional patterns can generally be characterized by rules. For example, the past tense of most English verbs (“regulars”) is formed by adding an -ed suffix. However, many complex linguistic forms have rather idiosyncratic mappings. For example, “irregular” English verbs have past tense forms that cannot be derived from their stems in a consistent manner. Whether regular and irregular forms depend on fundamentally distinct neurocognitive processes (rule-governed combination vs. lexical memorization), or whether a single processing system is sufficient to explain the phenomena, has engendered considerable investigation and debate. We recorded event-related potentials while participants read English sentences that were either correct or had violations of regular past tense inflection, irregular past tense inflection, syntactic phrase structure, or lexical semantics. Violations of regular past tense and phrase structure, but not of irregular past tense or lexical semantics, elicited left-lateralized anterior negativities (LANs). These seem to reflect neurocognitive substrates that underlie compositional processes across linguistic domains, including morphology and syntax. Regular, irregular, and phrase structure violations all elicited later positivities that were maximal over right parietal sites (P600s), and which seem to index aspects of controlled syntactic processing of both phrase structure and morphosyntax. The results suggest distinct neurocognitive substrates for processing regular and irregular past tense forms: regulars depending on compositional processing, and irregulars stored in lexical memory. PMID:17070703

  19. Mainstreaming: Merging Regular and Special Education.

    ERIC Educational Resources Information Center

    Hasazi, Susan E.; And Others

    The booklet on mainstreaming looks at the merging of special and regular education as a process rather than as an end. Chapters address the following topics (sample subtopics in parentheses): what is mainstreaming; pros and cons of mainstreaming; forces influencing change in special education (educators, parents and advocacy groups, the courts,…

  20. Influence of platform and abutment angulation on peri-implant bone. A three-dimensional finite element stress analysis.

    PubMed

    Martini, Ana Paula; Barros, Rosália Moreira; Júnior, Amilcar Chagas Freitas; Rocha, Eduardo Passos; de Almeida, Erika Oliveira; Ferraz, Cacilda Cunha; Pellegrin, Maria Cristina Jimenez; Anchieta, Rodolfo Bruniera

    2013-12-01

    The aim of this study was to evaluate stress distribution on the peri-implant bone, simulating the influence of Nobel Select implants with straight or angulated abutments on regular and switching platforms in the anterior maxilla, by means of 3-dimensional finite element analysis. Four mathematical models of a central incisor supported by an external hexagon implant (13 mm × 5 mm) were created varying the platform (R, regular or S, switching) and the abutments (S, straight or A, angulated 15°). The models were created by using Mimics 13 and Solid Works 2010 software programs. The numerical analysis was performed using ANSYS Workbench 10.0. Oblique forces (100 N) were applied to the palatine surface of the central incisor. The bone/implant interface was considered perfectly integrated. Maximum (σmax) and minimum (σmin) principal stress values were obtained. For the cortical bone the highest stress values (σmax) were observed in the RA (regular platform and angulated abutment, 51 MPa), followed by SA (platform switching and angulated abutment, 44.8 MPa), RS (regular platform and straight abutment, 38.6 MPa) and SS (platform switching and straight abutment, 36.5 MPa). For the trabecular bone, the highest stress values (σmax) were observed in the RA (6.55 MPa), followed by RS (5.88 MPa), SA (5.60 MPa), and SS (4.82 MPa). The regular platform generated higher stress in the cervical peri-implant region of the cortical and trabecular bone than the platform switching, irrespective of the abutment used (straight or angulated).

  1. Differential Activation of Fast-Spiking and Regular-Firing Neuron Populations During Movement and Reward in the Dorsal Medial Frontal Cortex

    PubMed Central

    Insel, Nathan; Barnes, Carol A.

    2015-01-01

    The medial prefrontal cortex is thought to be important for guiding behavior according to an animal's expectations. Efforts to decode the region have focused not only on the question of what information it computes, but also on how distinct circuit components become engaged during behavior. We find that the activity of regular-firing, putative projection neurons contains rich information about behavioral context and firing fields cluster around reward sites, while activity among putative inhibitory and fast-spiking neurons is most associated with movement and accompanying sensory stimulation. These dissociations were observed even between adjacent neurons with apparently reciprocal, inhibitory–excitatory connections. A smaller population of projection neurons with burst-firing patterns did not show clustered firing fields around rewards; these neurons, although heterogeneous, were generally less selective for behavioral context than regular-firing cells. The data suggest a network that tracks an animal's behavioral situation while, at the same time, regulating excitation levels to emphasize high valued positions. In this scenario, the function of fast-spiking inhibitory neurons is to constrain network output relative to incoming sensory flow. This scheme could serve as a bridge between abstract sensorimotor information and single-dimensional codes for value, providing a neural framework to generate expectations from behavioral state. PMID:24700585

  2. Simulation of gas flow in micro-porous media with the regularized lattice Boltzmann method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Junjian; Kang, Qinjun; Wang, Yuzhu

    One primary challenge for the prediction of gas flow in unconventional gas reservoirs at the pore scale, such as shale and tight gas reservoirs, is the geometric complexity of the micro-porous media. In this paper, a regularized multiple-relaxation-time (MRT) lattice Boltzmann (LB) model is applied to analyze gas flow, at the pore scale, in a 2-dimensional micro-porous medium reconstructed by the quartet structure generation set (QSGS). The velocity distribution inside the porous structure is presented and analyzed, and the effects of the porosity and specific surface area on the rarefied gas flow and apparent permeability are examined. The simulation results indicate that the gas exhibits different flow behaviours at various pressure conditions and that the gas permeability is strongly related to the pressure. Finally, increased porosity or decreased specific surface area leads to an increase of the gas apparent permeability, and the gas flow is more sensitive to the pore morphological properties at low-pressure conditions.

  3. Simulation of gas flow in micro-porous media with the regularized lattice Boltzmann method

    DOE PAGES

    Wang, Junjian; Kang, Qinjun; Wang, Yuzhu; ...

    2017-06-01

    One primary challenge for the prediction of gas flow in unconventional gas reservoirs at the pore scale, such as shale and tight gas reservoirs, is the geometric complexity of the micro-porous media. In this paper, a regularized multiple-relaxation-time (MRT) lattice Boltzmann (LB) model is applied to analyze gas flow, at the pore scale, in a 2-dimensional micro-porous medium reconstructed by the quartet structure generation set (QSGS). The velocity distribution inside the porous structure is presented and analyzed, and the effects of the porosity and specific surface area on the rarefied gas flow and apparent permeability are examined. The simulation results indicate that the gas exhibits different flow behaviours at various pressure conditions and that the gas permeability is strongly related to the pressure. Finally, increased porosity or decreased specific surface area leads to an increase of the gas apparent permeability, and the gas flow is more sensitive to the pore morphological properties at low-pressure conditions.

  4. A Mobile Anchor Assisted Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks

    PubMed Central

    Rodrigues, Joel J. P. C.

    2014-01-01

    Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints of cost and power consumption make it infeasible to equip each sensor node in the network with a global positioning system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use several mobile anchors which are equipped with GPS units moving among unknown nodes and periodically broadcasting their current locations to help nearby unknown nodes with localization. This paper proposes a mobile anchor assisted localization algorithm based on regular hexagon (MAALRH) in two-dimensional WSNs, which can cover the whole monitoring area with a boundary compensation method. Unknown nodes calculate their positions by using trilateration. We compare the MAALRH with HILBERT, CIRCLES, and S-CURVES algorithms in terms of localization ratio, localization accuracy, and path length. Simulations show that the MAALRH can achieve high localization ratio and localization accuracy when the communication range is not smaller than the trajectory resolution. PMID:25133212
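    The localization step that the mobile anchor enables is ordinary trilateration, which reduces to linear least squares after subtracting one range equation from the others. A minimal sketch (made-up anchor positions and ranges, not the MAALRH implementation):

      # Trilateration from three anchor broadcasts via linear least squares.
      import numpy as np

      anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.66]])
      p_true = np.array([4.0, 3.0])
      rng = np.random.default_rng(3)
      d = np.linalg.norm(anchors - p_true, axis=1) + 0.05 * rng.normal(size=3)

      # |p - a_i|^2 = d_i^2; subtracting the i = 0 equation linearizes it.
      A = 2.0 * (anchors[1:] - anchors[0])
      b = (d[0]**2 - d[1:]**2
           + (anchors[1:]**2).sum(1) - (anchors[0]**2).sum())
      p_est, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(p_est)                     # close to p_true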

  5. Matching Extension in Regular Graphs

    DTIC Science & Technology

    1989-01-01

    Cited references recovered from the report: L. Lovász and M.D. Plummer, Matching Theory, Ann. Discrete Math. 29, North-Holland, Amsterdam, 1986; M.D. Plummer, The matching structure of graphs: some recent results; ... maximums d'un graphe, These, Dr. troisieme cycle, Univ. Grenoble, 1978; D. Naddef and W.R. Pulleyblank, Matching in regular graphs, Discrete Math. 34, 1981, 283-291; M.D. Plummer, On n-extendable graphs, Discrete Math. 31, 1980, 201-210; M.D. Plummer, Matching extension in planar graphs IV.

  6. Regular and Special Educators: Handicap Integration Attitudes and Implications for Consultants.

    ERIC Educational Resources Information Center

    Gans, Karen D.

    1985-01-01

    One-hundred twenty-eight regular and 133 special educators responded to a questionnaire on mainstreaming. The two groups were similar in their attitudes. Regular educators displayed more negative attitudes, but the differences rarely reached significance. Group differences became more apparent when attitudes concerning specific handicapping…

  7. Proper time regularization and the QCD chiral phase transition

    PubMed Central

    Cui, Zhu-Fang; Zhang, Jin-Li; Zong, Hong-Shi

    2017-01-01

    We study the QCD chiral phase transition at finite temperature and finite quark chemical potential within the two flavor Nambu–Jona-Lasinio (NJL) model, where a generalization of the proper-time regularization scheme is motivated and implemented. We find that in the chiral limit the whole transition line in the phase diagram is of second order, whereas for finite quark masses a crossover is observed. Moreover, if we take into account the influence of the quark condensate on the coupling strength (which also provides a possible way of how the effective coupling varies with temperature and quark chemical potential), it is found that a critical end point (CEP) may appear. These findings differ substantially from other NJL results which use alternative regularization schemes; some explanation and discussion are given at the end. This indicates that the regularization scheme can have a dramatic impact on the study of the QCD phase transition within the NJL model. PMID:28401889
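    The flavor of the scheme can be shown on the standard quadratically divergent one-loop integral (a textbook illustration, not the paper's gap equation): using 1/(k^2 + m^2) = ∫_0^∞ dτ exp(-τ(k^2 + m^2)) and cutting the proper-time integral off at τ = 1/Λ^2 gives ∫ d^4k_E/(2π)^4 1/(k^2 + m^2) → (1/16π^2) ∫_{1/Λ^2}^∞ dτ τ^{-2} exp(-τ m^2).

      # Proper-time regularized tadpole integral (illustrative units).
      import numpy as np
      from scipy.integrate import quad

      def proper_time_tadpole(m, Lam):
          integrand = lambda tau: np.exp(-tau * m**2) / tau**2
          val, _ = quad(integrand, 1.0 / Lam**2, np.inf)
          return val / (16.0 * np.pi**2)

      for Lam in (1.0, 5.0, 20.0):     # grows roughly like Lam^2, as expected
          print(Lam, proper_time_tadpole(m=0.3, Lam=Lam))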

  8. A regularity result for fixed points, with applications to linear response

    NASA Astrophysics Data System (ADS)

    Sedro, Julien

    2018-04-01

    In this paper, we show a series of abstract results on fixed point regularity with respect to a parameter. They are based on a Taylor expansion taking into account a loss-of-regularity phenomenon, typically occurring for composition operators acting on spaces of functions of finite regularity. We generalize this approach to higher order differentiability through the notion of an n-graded family. We then give applications to the fixed point of a nonlinear map, and to linear response in the context of (uniformly) expanding dynamics (theorem 3 and corollary 2), in the spirit of Gouëzel-Liverani.

  9. Determinants of regular smoking onset in South Africa using duration analysis.

    PubMed

    Vellios, Nicole; van Walbeek, Corné

    2016-07-18

    South Africa has achieved significant success with its tobacco control policy. Between 1994 and 2012, the real price of cigarettes increased by 229%, while regular smoking prevalence decreased from about 31% to 18.2%. Cigarette prices and socioeconomic variables are used to examine the determinants of regular smoking onset. We apply duration analysis techniques to the National Income Dynamics Study, a nationally representative survey of South Africa. We find that an increase in cigarette prices significantly reduces regular smoking initiation among males, but not among females. Regular smoking among parents is positively correlated with smoking initiation among children. Children with more educated parents are less likely to initiate regular smoking than those with less educated parents. Africans initiate later and at lower rates than other race groups. As the tobacco epidemic is shifting towards low-income and middle-income countries, there is an increasing urgency to perform studies in these countries to influence policy. Higher cigarette excise taxes, which lead to higher retail prices, reduce smoking prevalence by encouraging smokers to quit and by discouraging young people from starting smoking.

  10. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    PubMed

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.

  11. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso

    PubMed Central

    Kong, Shengchun; Nan, Bin

    2013-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses. PMID:24516328

  12. Real-world spatial regularities affect visual working memory for objects.

    PubMed

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V

    2015-12-01

    Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.

  13. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    PubMed

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1 and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Since weights reflect the importance of connections, this makes Shakeout superior to Dropout for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture.

  14. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-squares type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.

  15. Regularization method for large eddy simulations of shock-turbulence interactions

    NASA Astrophysics Data System (ADS)

    Braun, N. O.; Pullin, D. I.; Meiron, D. I.

    2018-05-01

    The rapid change in scales over a shock has the potential to introduce unique difficulties in Large Eddy Simulations (LES) of compressible shock-turbulence flows if the governing model does not sufficiently capture the spectral distribution of energy in the upstream turbulence. A method for the regularization of LES of shock-turbulence interactions is presented which is constructed to enforce that the energy content in the highest resolved wavenumbers decays as k^{-5/3}, and is computed locally in physical space at low computational cost. The application of the regularization to an existing subgrid scale model is shown to remove high wavenumber errors while maintaining agreement with Direct Numerical Simulations (DNS) of forced and decaying isotropic turbulence. Linear interaction analysis is implemented to model the interaction of a shock with isotropic turbulence from LES. Comparisons to analytical models suggest that the regularization significantly improves the ability of the LES to predict amplifications in subgrid terms over the modeled shockwave. LES and DNS of decaying, modeled post shock turbulence are also considered, and inclusion of the regularization in shock-turbulence LES is shown to improve agreement with lower Reynolds number DNS.
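    A one-dimensional cartoon of the idea (ours; the actual method acts locally in physical space inside an LES model) rescales the highest resolved wavenumbers of a field so that its energy spectrum decays as k^{-5/3} beyond a chosen cutoff:

      # Impose a k^(-5/3) tail on the spectrum of a 1-D stand-in field.
      import numpy as np

      n = 256
      rng = np.random.default_rng(4)
      u = rng.normal(size=n)                    # stand-in velocity field
      uk = np.fft.rfft(u)
      k = np.arange(uk.size); k[0] = 1          # avoid division by zero

      kc = 32                                   # start of the corrected band
      E = np.abs(uk)**2
      target = E[kc] * (k / kc) ** (-5.0 / 3.0) # desired spectral envelope
      scale = np.ones_like(E)
      scale[kc:] = np.sqrt(target[kc:] / np.maximum(E[kc:], 1e-30))
      u_reg = np.fft.irfft(uk * scale, n=n)     # field with regularized tail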

  16. Human action recognition with group lasso regularized-support vector machine

    NASA Astrophysics Data System (ADS)

    Luo, Huiwu; Lu, Huanzhang; Wu, Yabei; Zhao, Fei

    2016-05-01

    The bag-of-visual-words (BOVW) and Fisher kernel are two popular models in human action recognition, and the support vector machine (SVM) is the most commonly used classifier for both. We show two kinds of group structures in the feature representations constructed by BOVW and the Fisher kernel, respectively; since the structural information of a feature representation can be seen as a prior for the classifier, it can improve the performance of the classifier, as has been verified in several areas. However, the standard SVM employs L2-norm regularization in its learning procedure, which penalizes each variable individually and cannot express the structural information of the feature representation. We replace the L2-norm regularization with group lasso regularization in the standard SVM, and a group lasso regularized-support vector machine (GLRSVM) is proposed. Then, we embed the group structural information of the feature representation into GLRSVM. Finally, we introduce an algorithm to solve the optimization problem of GLRSVM by the alternating direction method of multipliers (ADMM). The experiments, evaluated on the KTH, YouTube, and Hollywood2 datasets, show that our method achieves promising results and improves on the state-of-the-art methods on the KTH and YouTube datasets.
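    The ingredient that distinguishes GLRSVM from a standard SVM is the group lasso penalty, whose proximal operator (block soft-thresholding) is what ADMM-style solvers apply at each iteration. A minimal sketch with made-up groups:

      # Prox of lam * sum_g ||w_g||_2: shrink each group's block toward zero.
      import numpy as np

      def group_soft_threshold(w, groups, lam):
          out = w.copy()
          for g in groups:
              norm = np.linalg.norm(w[g])
              out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * w[g]
          return out

      w = np.array([0.9, -0.2, 0.1, 0.05, -1.5, 2.0])
      groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
      print(group_soft_threshold(w, groups, lam=0.5))  # weak group zeroed out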

  17. Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.

    PubMed

    Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A

    2013-02-01

    The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers. There were no differences in the overall liking scores between diet and regular beverage consumers for other products except for a sparkling beverage sweetened with juice which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers but one segment had a greater percentage of regular beverage consumers (P < 0.05). The 3 preference segments were named: cluster 1 (C1) sweet taste and carbonation mouthfeel lovers, cluster 2 (C2) carbonation mouthfeel lovers, sweet and bitter taste acceptors, and cluster 3 (C3) bitter taste avoiders, mouthfeel and sweet taste lovers. User status (diet or regular beverage consumers) did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverage with diet and regular beverage consumers allowed the determination of drivers of liking of both populations. The understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. Preference drivers established in this study provide product developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers.

  18. Platform switching: biomechanical evaluation using three-dimensional finite element analysis.

    PubMed

    Tabata, Lucas Fernando; Rocha, Eduardo Passos; Barão, Valentim Adelino Ricardo; Assunção, Wirley Goncalves

    2011-01-01

    The objective of this study was to evaluate, using three-dimensional finite element analysis (3D FEA), the stress distribution in peri-implant bone tissue, implants, and prosthetic components of implant-supported single crowns with the use of the platform-switching concept. Three 3D finite element models were created to replicate an external-hexagonal implant system with peri-implant bone tissue in which three different implant-abutment configurations were represented. In the regular platform (RP) group, a regular 4.1-mm-diameter abutment (UCLA) was connected to regular 4.1-mm-diameter implant. The platform-switching (PS) group was simulated by the connection of a wide implant (5.0 mm diameter) to a regular 4.1-mm-diameter UCLA abutment. In the wide-platform (WP) group, a 5.0-mm-diameter UCLA abutment was connected to a 5.0-mm-diameter implant. An occlusal load of 100 N was applied either axially or obliquely on the models using ANSYS software. Both the increase in implant diameter and the use of platform switching played roles in stress reduction. The PS group presented lower stress values than the RP and WP groups for bone and implant. In the peri-implant area, cortical bone exhibited a higher stress concentration than the trabecular bone in all models and both loading situations. Under oblique loading, higher intensity and greater distribution of stress were observed than under axial loading. Platform switching reduced von Mises (17.5% and 9.3% for axial and oblique loads, respectively), minimum (compressive) (19.4% for axial load and 21.9% for oblique load), and maximum (tensile) principal stress values (46.6% for axial load and 26.7% for oblique load) in the peri-implant bone tissue. Platform switching led to improved biomechanical stress distribution in peri-implant bone tissue. Oblique loads resulted in higher stress concentrations than axial loads for all models. Wide-diameter implants had a large influence in reducing stress values in the implant system.

  19. 29 CFR 779.18 - Regular rate.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... premium rate paid for certain hours worked by the employee in any day or workweek because such hours are hours worked in excess of eight in a day or in excess of the maximum workweek applicable to such... the basic, normal, or regular workday (not exceeding 8 hours) or workweek (not exceeding the maximum...

  20. Regularization and Approximation of a Class of Evolution Problems in Applied Mathematics

    DTIC Science & Technology

    1991-01-01

    AD-A242 223. Final Report: "Regularization and Approximation of a Class of Evolution Problems in Applied Mathematics," The University of Texas at Austin, Austin, TX 78712, November 1991. ... micro-structured parabolic system. A mathematical analysis of the regularized equations has been developed to support our approach. Supporting...

  1. Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.

    PubMed

    Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob

    2011-03-01

    We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent, and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find considerable speedup between our algorithm and competing methods.
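    The pathwise strategy is easiest to see on the squared-error analogue of the Cox problem (a sketch of the same warm-started coordinate-descent pattern, not the authors' Cox/elastic-net code):

      # Lasso coordinate descent along a decreasing lambda path, warm-started.
      import numpy as np

      def lasso_cd(X, y, lam, w, n_iter=200):
          n = X.shape[0]
          col_sq = (X**2).sum(0) / n
          for _ in range(n_iter):
              for j in range(X.shape[1]):
                  r = y - X @ w + X[:, j] * w[j]       # partial residual
                  rho = X[:, j] @ r / n
                  w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
          return w

      rng = np.random.default_rng(5)
      X = rng.normal(size=(100, 20))
      beta = np.zeros(20); beta[:3] = (2.0, -1.0, 0.5)
      y = X @ beta + 0.1 * rng.normal(size=100)

      w = np.zeros(20)
      for lam in np.geomspace(1.0, 0.01, 10):          # regularization path
          w = lasso_cd(X, y, lam, w)                   # warm start from previous
      print(np.round(w[:5], 2))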

  2. The Regularity of Optimal Irrigation Patterns

    NASA Astrophysics Data System (ADS)

    Morel, Jean-Michel; Santambrogio, Filippo

    2010-02-01

    A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. The aim of this paper is to prove the regularity of paths (rivers, branches, ...) when the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
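    The reason wide routes are favored is the strict concavity of s^α, which makes shared transport cheaper per unit length than separate transport; a two-line numerical check (α = 0.5 chosen purely for illustration):

      alpha = 0.5
      s1, s2 = 3.0, 5.0
      # (s1 + s2)^alpha = 2.83 < s1^alpha + s2^alpha = 3.97: merging wins.
      print((s1 + s2)**alpha, s1**alpha + s2**alpha)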

  3. Nonpolynomial Lagrangian approach to regular black holes

    NASA Astrophysics Data System (ADS)

    Colléaux, Aimeric; Chinaglia, Stefano; Zerbini, Sergio

    We present a review of Lagrangian models admitting spherically symmetric regular black holes (RBHs) and cosmological bounce solutions. Nonlinear electrodynamics, nonpolynomial gravity, and fluid approaches are explained in detail. They consist, respectively, of a gauge-invariant generalization of the Maxwell Lagrangian, modifications of the Einstein-Hilbert action via nonpolynomial curvature invariants, and the reconstruction of density profiles able to cure the central singularity of black holes. The nonpolynomial gravity curvature invariants have the special property of being second-order and polynomial in the metric field in spherically symmetric spacetimes. Along the way, other models and results are discussed, and some general properties that RBHs should satisfy are mentioned. A covariant Sakharov criterion for the absence of singularities in dynamical spherically symmetric spacetimes is also proposed and checked for some examples of such regular metric fields.

  4. Condom Use and Intimacy among Tajik Male Migrants and their Regular Female Partners in Moscow

    PubMed Central

    Polutnik, Chloe; Jonbekov, Jonbek; Shoakova, Farzona; Bahromov, Mahbat; Weine, Stevan

    2014-01-01

    This study examined condom use and intimacy among Tajik male migrants and their regular female partners in Moscow, Russia. The study included a survey of 400 Tajik male labour migrants and longitudinal ethnographic interviews with 30 of the surveyed male migrants and 30 of their regular female partners. Of the surveyed male migrants, 351 (88%) reported having a regular female partner in Moscow. Findings demonstrated that the migrants' and regular partners' intentions to use condoms diminished with increased intimacy, yet each party perceived intimacy differently. Migrants' intimacy with regular partners was determined by familiarity and the perceived sexual cleanliness of the partner. Migrants believed that Muslim women were cleaner than Orthodox Christian women and reported using condoms more frequently with Orthodox Christian regular partners. Regular partners reported determining intimacy based on the perceived commitment of the male migrant. When perceived commitment faced a crisis, intimacy declined, and regular partners renegotiated condom use. The association between intimacy and condom use suggests that HIV prevention programmes should aim to help male migrants and female regular partners to dissociate their approaches to condom use from their perceptions of intimacy. PMID:25033817

  5. Regularities in travel demand : an international perspective

    DOT National Transportation Integrated Search

    2000-12-01

    The major mobility variables from about 30 travel surveys in more than 10 countries are compared in this paper. The analysis of cross-sectional and longitudinal data broadly confirms some earlier findings of regularities in time and money expenditure...

  6. Magnetic behavior in heterometallic one-dimensional chains or octanuclear complex regularly aligned with metal-metal bonds as -Rh-Rh-Pt-Cu-Pt

    NASA Astrophysics Data System (ADS)

    Uemura, Kazuhiro

    2018-06-01

    Heterometallic one-dimensional chains, [{Rh2(O2CCH3)4}{Pt2Cu(piam)4(NH3)4}]n(PF6)2n (1 and 2, piam = pivalamidate) and [{Rh2(O2CCH3)4}{Pt2Cu(piam)4(NH3)4}2](CF3CO2)2(ClO4)2·2H2O (3), are paramagnetic one-dimensional chains or octanuclear complexes aligned either as -Rh-Rh-Pt-Cu-Pt- (1 and 2) or as Pt-Cu-Pt-Rh-Rh-Pt-Cu-Pt (3) with metal-metal bonds. Compounds 1-3 have rare structures in that the paramagnetic Cu atoms are linked by direct metal-metal bonds. Magnetic susceptibility measurements for 1-3 performed at temperatures of 2 K-300 K indicated that the unpaired electrons localize in the Cu 3d(x^2-y^2) orbitals, where the S = 1/2 Cu(II) atoms are weakly antiferromagnetically coupled with J = -0.35 cm^-1 (1), -0.47 cm^-1 (2), and -0.45 cm^-1 (3).

  7. Stem Access in Regular and Irregular Inflection: Evidence from German Participles

    ERIC Educational Resources Information Center

    Smolka, Eva; Zwitserlood, Pienie; Rosler, Frank

    2007-01-01

    This study investigated whether German participles are retrieved as whole words from lexical storage or whether they are accessed via their morphemic constituents. German participle formation is of particular interest, since it is concatenative for both regular and irregular verbs and results from combinations of regular/irregular stems with…

  8. Regularity of a renewal process estimated from binary data.

    PubMed

    Rice, John D; Strawderman, Robert L; Johnson, Brent A

    2017-10-09

    Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.
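    For the gamma renewal process the regularity measure is available in closed form: the CV of the interevent times is 1/sqrt(shape), so a larger shape parameter means more regular (less Poisson-like) event timing. A small simulation check (assumed parameters, not the paper's data):

      # CV of gamma interevent times versus the closed-form 1/sqrt(shape).
      import numpy as np

      rng = np.random.default_rng(6)
      for shape in (0.5, 1.0, 4.0):
          gaps = rng.gamma(shape, scale=1.0 / shape, size=100_000)  # mean 1
          cv = gaps.std() / gaps.mean()
          print(shape, round(cv, 3), round(1 / np.sqrt(shape), 3))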

  9. Regular analgesic use and risk of multiple myeloma.

    PubMed

    Moysich, Kirsten B; Bonner, Mathew R; Beehler, Gregory P; Marshall, James R; Menezes, Ravi J; Baker, Julie A; Weiss, Joli R; Chanan-Khan, Asher

    2007-04-01

    Analgesic use has been implicated in the chemoprevention of a number of solid tumors, but to date no previous research has focused on the role of analgesics in the etiology of multiple myeloma (MM). We conducted a hospital-based case-control study of 117 patients with primary, incident MM and 483 age and residence matched controls without benign or malignant neoplasms. All participants received medical services at Roswell Park Cancer Institute in Buffalo, NY, and completed a comprehensive epidemiological questionnaire. Participants who reported analgesic use at least once a week for at least 6 months were classified as regular users; individuals who did not use analgesics regularly served as the reference group throughout the analyses. We used unconditional logistic regression analyses to compute crude and adjusted odds ratios (ORs) with corresponding 95% confidence intervals (CIs). Compared to non-users, regular aspirin users were not at reduced risk of MM (adjusted OR=0.99; 95% CI 0.65-1.49), nor were participants with the highest frequency or duration of aspirin use. A significant risk elevation was found for participants who were regular acetaminophen users (adjusted OR=2.95; 95% CI 1.72-5.08). Further, marked increases in risk of MM were noted with both greater frequency (>7 tablets weekly; adjusted OR=4.36; 95% CI 1.70-11.2) and greater duration (>10 years; adjusted OR=3.26; 95% CI 1.52-7.02) of acetaminophen use. We observed no evidence of a chemoprotective effect of aspirin on MM risk, but observed significant risk elevations with various measures of acetaminophen use. Our results warrant further investigation in population-based case-control and cohort studies and should be interpreted with caution in light of the limited sample size and biases inherent in hospital-based studies.

  10. SparseBeads data: benchmarking sparsity-regularized computed tomography

    NASA Astrophysics Data System (ADS)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice, and how this number may depend on the image, remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; empirical results, however, suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes, as well as mixtures, were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels, allowing the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and the gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials and foams, as well as in non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
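
    Gradient sparsity here is simply the fraction (or count) of pixels with a nonzero spatial gradient. A small illustrative sketch, using a synthetic piecewise-constant image as a stand-in for a beadpack slice and scikit-image's Chambolle TV denoiser (assumed available; this is not the SparseBeads code):

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      rng = np.random.default_rng(2)

      # Hypothetical stand-in for a beadpack slice: piecewise-constant discs.
      x = np.zeros((128, 128))
      yy, xx = np.mgrid[:128, :128]
      for cy, cx, r in [(40, 40, 18), (90, 70, 14), (60, 100, 10)]:
          x[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 1.0

      # Gradient sparsity: fraction of pixels with nonzero spatial gradient.
      gy, gx = np.gradient(x)
      print(f"gradient sparsity: {np.count_nonzero(np.hypot(gy, gx)) / x.size:.3f}")

      # TV regularization of a noisy version; weight trades data fit vs. smoothness.
      noisy = x + 0.2 * rng.standard_normal(x.shape)
      tv = denoise_tv_chambolle(noisy, weight=0.15)
      print(f"RMSE noisy: {np.sqrt(np.mean((noisy - x) ** 2)):.3f}, "
            f"RMSE TV: {np.sqrt(np.mean((tv - x) ** 2)):.3f}")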

  11. Discharge regularity in the turtle posterior crista: comparisons between experiment and theory.

    PubMed

    Goldberg, Jay M; Holt, Joseph C

    2013-12-01

    Intra-axonal recordings were made from bouton fibers near their termination in the turtle posterior crista. Spike discharge, miniature excitatory postsynaptic potentials (mEPSPs), and afterhyperpolarizations (AHPs) were monitored during resting activity in both regularly and irregularly discharging units. Quantal size (qsize) and quantal rate (qrate) were estimated by shot-noise theory. Theoretically, the ratio, σV/(dμV/dt), between synaptic noise (σV) and the slope of the mean voltage trajectory (dμV/dt) near threshold crossing should determine discharge regularity. AHPs are deeper and more prolonged in regular units; as a result, dμV/dt is larger, the more regular the discharge. The qsize is larger and qrate smaller in irregular units; these oppositely directed trends lead to little variation in σV with discharge regularity. Of the two variables, dμV/dt is much more influential than the nearly constant σV in determining regularity. Sinusoidal canal-duct indentations at 0.3 Hz led to modulations in spike discharge and synaptic voltage. Gain, the ratio between the amplitudes of the two modulations, and the phase leads relative to indentation of both modulations are larger in irregular units. Gain variations parallel the sensitivity of the postsynaptic spike encoder, the set of conductances that converts synaptic input into spike discharge. Phase variations reflect both synaptic inputs to the encoder and postsynaptic processes. Experimental data were interpreted using a stochastic integrate-and-fire model. Advantages of an irregular discharge include an enhanced encoder gain and the prevention of nonlinear phase locking. Regular and irregular units are more efficient in the encoding of low- and high-frequency head rotations, respectively.
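
    The encoder logic can be illustrated with a stochastic integrate-and-fire sketch: Poisson quantal EPSPs of size qsize arriving at rate qrate drive a leaky membrane to threshold, and the interspike-interval CV falls as the mean trajectory steepens relative to the synaptic noise. This is a minimal toy model with hypothetical parameters, not the authors' fitted model:

      import numpy as np

      rng = np.random.default_rng(3)

      def lif_cv(qsize, qrate, tau=0.02, v_th=1.0, dt=1e-4, t_max=20.0):
          """Leaky integrate-and-fire neuron driven by Poisson quantal EPSPs.
          Returns the coefficient of variation of its interspike intervals."""
          v, last_spike, isis = 0.0, 0.0, []
          for step in range(int(t_max / dt)):
              n_epsp = rng.poisson(qrate * dt)       # quantal inputs this step
              v += -v * dt / tau + qsize * n_epsp    # leak + synaptic drive
              if v >= v_th:
                  t = step * dt
                  isis.append(t - last_spike)
                  last_spike, v = t, 0.0             # reset after the spike
          isis = np.asarray(isis)
          return isis.std() / isis.mean()

      # Same mean drive (qsize * qrate * tau = 2), different quantal makeup:
      # small frequent quanta -> low synaptic noise -> regular discharge;
      # large sparse quanta -> high synaptic noise -> irregular discharge.
      print("CV, small/frequent quanta:", round(lif_cv(0.02, 5000.0), 2))
      print("CV, large/sparse quanta: ", round(lif_cv(0.20, 500.0), 2))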

  12. Discharge regularity in the turtle posterior crista: comparisons between experiment and theory

    PubMed Central

    Holt, Joseph C.

    2013-01-01

    Intra-axonal recordings were made from bouton fibers near their termination in the turtle posterior crista. Spike discharge, miniature excitatory postsynaptic potentials (mEPSPs), and afterhyperpolarizations (AHPs) were monitored during resting activity in both regularly and irregularly discharging units. Quantal size (qsize) and quantal rate (qrate) were estimated by shot-noise theory. Theoretically, the ratio, σV/(dμV/dt), between synaptic noise (σV) and the slope of the mean voltage trajectory (dμV/dt) near threshold crossing should determine discharge regularity. AHPs are deeper and more prolonged in regular units; as a result, dμV/dt is larger, the more regular the discharge. The qsize is larger and qrate smaller in irregular units; these oppositely directed trends lead to little variation in σV with discharge regularity. Of the two variables, dμV/dt is much more influential than the nearly constant σV in determining regularity. Sinusoidal canal-duct indentations at 0.3 Hz led to modulations in spike discharge and synaptic voltage. Gain, the ratio between the amplitudes of the two modulations, and the phase leads relative to indentation of both modulations are larger in irregular units. Gain variations parallel the sensitivity of the postsynaptic spike encoder, the set of conductances that converts synaptic input into spike discharge. Phase variations reflect both synaptic inputs to the encoder and postsynaptic processes. Experimental data were interpreted using a stochastic integrate-and-fire model. Advantages of an irregular discharge include an enhanced encoder gain and the prevention of nonlinear phase locking. Regular and irregular units are more efficient in the encoding of low- and high-frequency head rotations, respectively. PMID:24004525

  13. Regular black holes: Electrically charged solutions, Reissner-Nordstroem outside a de Sitter core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemos, Jose P. S.; Zanchin, Vilson T.; Centro de Ciencias Naturais e Humanas, Universidade Federal do ABC, Rua Santa Adelia, 166, 09210-170, Santo Andre, Sao Paulo

    2011-06-15

    To have the correct picture of a black hole as a whole, it is of crucial importance to understand its interior. The singularities that lurk inside the horizon of the usual Kerr-Newman family of black hole solutions signal an endpoint to the physical laws and, as such, should be substituted in one way or another. A proposal that has been around for some time is to replace the singular region of the spacetime by a region containing some form of matter or false vacuum configuration that can also cohabit with the black hole interior. Black holes without singularities are called regular black holes. In the present work, regular black hole solutions are found within general relativity coupled to Maxwell's electromagnetism and charged matter. We show that there are objects which correspond to regular charged black holes, whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and the boundary between both regions is made of an electrically charged spherically symmetric coat. There are several types of solutions: regular nonextremal black holes with a null matter boundary, regular nonextremal black holes with a timelike matter boundary, regular extremal black holes with a timelike matter boundary, and regular overcharged stars with a timelike matter boundary. The main physical and geometrical properties of such charged regular solutions are analyzed.

  14. Periodic measure for the stochastic equation of the barotropic viscous gas in a discretized one-dimensional domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benseghir, Rym, E-mail: benseghirrym@ymail.com; Benchettah, Azzedine, E-mail: abenchettah@hotmail.com; Raynaud de Fitte, Paul, E-mail: prf@univ-rouen.fr

    2015-11-30

    A stochastic equation system corresponding to the description of the motion of a barotropic viscous gas in a discretized one-dimensional domain with a weight regularizing the density is considered. In [2], the existence of an invariant measure was established for this discretized problem in the stationary case. In this paper, applying a slightly modified version of Khas’minskii’s theorem [5], we generalize this result in the periodic case by proving the existence of a periodic measure for this problem.

  15. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method by applying the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, namely apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and true image of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach yields higher cross-correlation values than the standard FBP method.
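
    For orientation, the apodized-FBP baseline that the paper compares against can be reproduced with scikit-image's Radon tools, and the cross-correlation metric is one line of numpy. The sketch below uses the Shepp-Logan phantom only because it ships with scikit-image (the paper uses a Derenzo-type phantom) and assumes a recent scikit-image with the filter_name argument:

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon, rescale

      img = rescale(shepp_logan_phantom(), 0.5)
      theta = np.linspace(0.0, 180.0, 90, endpoint=False)
      sino = radon(img, theta=theta)

      # 'ramp' is the unapodized filter; 'hann' and 'cosine' apodize it.
      for fname in ("ramp", "hann", "cosine"):
          rec = iradon(sino, theta=theta, filter_name=fname)
          cc = np.corrcoef(img.ravel(), rec.ravel())[0, 1]
          print(f"{fname:>6}: cross-correlation = {cc:.4f}")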

  16. Three-dimensional finite elements for the analysis of soil contamination using a multiple-porosity approach

    NASA Astrophysics Data System (ADS)

    El-Zein, Abbas; Carter, John P.; Airey, David W.

    2006-06-01

    A three-dimensional finite-element model of contaminant migration in fissured clays or contaminated sand which includes multiple sources of non-equilibrium processes is proposed. The conceptual framework can accommodate a regular network of fissures in 1D, 2D or 3D and immobile solutions in the macro-pores of aggregated topsoils, as well as non-equilibrium sorption. A Galerkin weighted-residual statement for the three-dimensional form of the equations in the Laplace domain is formulated. The equations are discretized using linear and quadratic prism elements. The system of algebraic equations is solved in the Laplace domain and the solution is inverted to the time domain numerically. The model is validated and its scope is illustrated through the analysis of three problems: a waste repository deeply buried in fissured clay, a storage tank leaking into sand, and a sanitary landfill leaching into fissured clay over a sand aquifer.
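
    The abstract does not say which numerical inversion is used; a common choice for diffusion-type problems solved in the Laplace domain is the Gaver-Stehfest algorithm, which recovers f(t) from samples of the transform F(s) along the real axis. A self-contained sketch of that algorithm, shown here as one plausible option rather than the authors' scheme:

      import numpy as np
      from math import factorial, log

      def stehfest_weights(N):
          """Gaver-Stehfest weights V_k for an even number of terms N."""
          V = np.zeros(N)
          for k in range(1, N + 1):
              s = 0.0
              for j in range((k + 1) // 2, min(k, N // 2) + 1):
                  s += (j ** (N // 2) * factorial(2 * j)
                        / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                           * factorial(k - j) * factorial(2 * j - k)))
              V[k - 1] = (-1) ** (k + N // 2) * s
          return V

      def invert_laplace(F, t, N=12):
          """Approximate f(t) from its Laplace transform F(s) by Stehfest's method."""
          V = stehfest_weights(N)
          ln2_t = log(2.0) / t
          return ln2_t * sum(V[k] * F((k + 1) * ln2_t) for k in range(N))

      # Check on F(s) = 1/(s+1), whose inverse transform is exp(-t):
      for t in (0.5, 1.0, 2.0):
          print(t, invert_laplace(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))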

  17. Predictors of College Retention and Performance between Regular and Special Admissions

    ERIC Educational Resources Information Center

    Kim, Johyun

    2015-01-01

    This predictive correlational research study examined the effect of cognitive, demographic, and socioeconomic variables as predictors of regular and special admission students' first-year GPA and retention among a sample of 7,045 students. Findings indicated high school GPA and ACT scores were the two most effective predictors of regular and…

  18. Students with Chronic Conditions: Experiences and Challenges of Regular Education Teachers

    ERIC Educational Resources Information Center

    Selekman, Janice

    2017-01-01

    School nurses have observed the increasing prevalence of children with chronic conditions in the school setting; however, little is known about teacher experiences with these children in their regular classrooms. The purpose of this mixed-method study was to describe the experiences and challenges of regular education teachers when they have…

  19. Willingness of Regular and Special Educators to Teach Students with Handicaps.

    ERIC Educational Resources Information Center

    Gans, Karen Derk

    1987-01-01

    Regular educators (N=128) and special educators (N=133) in 21 Ohio school districts responded to a questionnaire regarding handicap integration. Willingness of regular educators to teach handicapped students depended more heavily on demographic variables (e.g., total number of years in teaching); willingness of special educators depended more on…

  20. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when the response is artificially transformed into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
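
    The component-wise (stage-wise) idea is easiest to see with squared-error loss: each boosting step refits only the single covariate that best explains the current residuals, so variable selection happens implicitly. The paper's setting substitutes the Cox partial likelihood for this loss; the sketch below, on hypothetical data, shows only the selection mechanism:

      import numpy as np

      rng = np.random.default_rng(4)

      # Hypothetical high-dimensional setting: n << p, a handful of true effects.
      n, p = 100, 1000
      X = rng.standard_normal((n, p))
      beta_true = np.zeros(p)
      beta_true[:5] = [2.0, -1.5, 1.0, -1.0, 0.5]
      y = X @ beta_true + rng.standard_normal(n)

      # Component-wise boosting: at each step, update only the covariate that
      # best fits the current residuals, damped by a learning rate nu.
      beta, nu, n_steps = np.zeros(p), 0.1, 200
      resid = y.copy()
      for _ in range(n_steps):
          scores = X.T @ resid                    # per-covariate fit to residuals
          j = np.argmax(np.abs(scores))
          step = scores[j] / (X[:, j] @ X[:, j])  # least-squares update for x_j
          beta[j] += nu * step
          resid -= nu * step * X[:, j]

      print("selected covariates:", np.nonzero(beta)[0][:10])
      print("estimated effects:  ", np.round(beta[beta != 0][:5], 2))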

  1. Study of the three-dimensional orientation of the labrum: its relations with the osseous acetabular rim

    PubMed Central

    Bonneau, Noémie; Bouhallier, July; Baylac, Michel; Tardieu, Christine; Gagey, Olivier

    2012-01-01

    Understanding the three-dimensional orientation of the coxo-femoral joint remains a challenge, as an accurate three-dimensional orientation ensures an efficient bipedal gait and posture. The quantification of the orientation of the acetabulum can be performed using the three-dimensional axis perpendicular to the plane that passes along the edge of the acetabular rim. However, the acetabular rim is not regular: an important indentation was observed in the anterior rim. An innovative cadaver study of the labrum was developed to shed light on the proper quantification of the three-dimensional orientation of the acetabulum. Dissections of 17 non-embalmed corpses were performed. Our results suggest that the acetabular rim is better represented by an anterior plane and a posterior plane rather than by a single plane along the entire rim, as is currently assumed. The development of the socket from the Y-shaped cartilage was suggested to explain the different orientations of these anterior and posterior planes. The labrum forms a plane whose orientation lies between those of the anterior and posterior parts of the acetabular rim, filling in irregularities of the bony rim. The vectors VL, VA2 and VP, representing the three-dimensional orientations of the labrum, the anterior rim and the posterior rim, are situated in a unique plane that appears biomechanically dependent. The three-dimensional orientation of the acetabulum is a fundamental parameter for understanding the hip joint mechanism. Important applications for hip surgery and rehabilitation, as well as for physical anthropology, are discussed. PMID:22360458
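
    The axis perpendicular to a best-fit rim plane is straightforward to compute from digitized rim points: center the points and take the singular vector with the smallest singular value. A hypothetical sketch on synthetic rim points; splitting the ring into two halves loosely mimics the paper's anterior/posterior (VA2/VP) comparison:

      import numpy as np

      def rim_plane_normal(points):
          """Least-squares plane through 3D rim points; returns the unit normal,
          i.e. the singular vector with the smallest singular value."""
          centered = points - points.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          return vt[-1]

      # Hypothetical digitized rim points: a tilted, slightly noisy ellipse in 3D.
      rng = np.random.default_rng(5)
      t = np.linspace(0, 2 * np.pi, 60)
      ring = np.column_stack([30 * np.cos(t), 25 * np.sin(t), 5 * np.sin(t)])
      ring += 0.5 * rng.standard_normal(ring.shape)

      # Fit separate planes to the two halves and compare their normals.
      n_ant = rim_plane_normal(ring[t <= np.pi])
      n_post = rim_plane_normal(ring[t > np.pi])
      angle = np.degrees(np.arccos(np.clip(abs(n_ant @ n_post), -1, 1)))
      print(f"angle between the two plane normals: {angle:.1f} deg")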

  2. A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks

    PubMed Central

    Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan

    2014-01-01

    Coverage pattern and deployment strategy are directly related to the optimum allocation of limited resources in wireless sensor networks, such as node energy, communication bandwidth, and computing power, and they largely determine network quality. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. First, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed; the relationship between coverage and the sensing radius of the nodes is then deduced, and the minimum number of sensor nodes needed to maintain full coverage of the network area is calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is reduced effectively while the coverage rate of the monitored area is ensured using our coverage pattern and deterministic deployment scheme. PMID:25045747
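
    The cuboid pattern's node count follows from one geometric fact: a cube of side 2r/√3 is inscribed in a sensing sphere of radius r (its space diagonal equals 2r), so a node at each cube centre covers its whole cell. A sketch of the resulting minimum count for a box-shaped volume (dimensions hypothetical):

      import math

      def min_nodes_cuboid_cover(L, W, H, r):
          """Minimum sensor count when the volume is tiled by cubes inscribed
          in the sensing sphere: side s = 2r/sqrt(3), one node per cube centre."""
          s = 2.0 * r / math.sqrt(3.0)
          return math.ceil(L / s) * math.ceil(W / s) * math.ceil(H / s)

      # Hypothetical 100 m x 60 m x 20 m monitored volume, 10 m sensing radius:
      print(min_nodes_cuboid_cover(100.0, 60.0, 20.0, 10.0))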

  3. Early, regular breast-milk pumping may lead to early breast-milk feeding cessation.

    PubMed

    Yourkavitch, Jennifer; Rasmussen, Kathleen M; Pence, Brian W; Aiello, Allison; Ennett, Susan; Bengtson, Angela M; Chetwynd, Ellen; Robinson, Whitney

    2018-06-01

    To estimate the effect of early, regular breast-milk pumping on time to breast-milk feeding (BMF) and exclusive BMF cessation, for working and non-working women. Using the Infant Feeding Practices Survey II (IFPS II), we estimated weighted hazard ratios (HR) for the effect of regular pumping (participant defined) compared with non-regular/not pumping, reported at month 2, on both time to BMF cessation (to 12 months) and time to exclusive BMF cessation (to 6 months), using inverse probability weights to control confounding. USA, 2005-2007. BMF (n 1624) and exclusively BMF (n 971) IFPS II participants at month 2. The weighted HR for time to BMF cessation was 1·62 (95 % CI 1·47, 1·78) and for time to exclusive BMF cessation was 1·14 (95 % CI 1·03, 1·25). Among non-working women, the weighted HR for time to BMF cessation was 2·05 (95 % CI 1·84, 2·28) and for time to exclusive BMF cessation was 1·10 (95 % CI 0·98, 1·22). Among working women, the weighted HR for time to BMF cessation was 0·90 (95 % CI 0·75, 1·07) and for time to exclusive BMF cessation was 1·14 (95 % CI 0·96, 1·36). Overall, regular pumpers were more likely to stop BMF and exclusive BMF than non-regular/non-pumpers. Non-working regular pumpers were more likely than non-regular/non-pumpers to stop BMF. There was no effect among working women. Early, regular pumpers may need specialized support to maintain BMF.

  4. Regular source of primary care and emergency department use of children in Victoria.

    PubMed

    Turbitt, Erin; Freed, Gary Lee

    2016-03-01

    The aim of this paper was to study the prevalence of a regular source of primary care for Victorian children attending one of four emergency departments (EDs) and to determine associated characteristics, including ED use. Responses were collected via an electronic survey from parents attending EDs with their child (≤9 years of age) for a lower-urgency condition. Single, multiple choice, and Likert scale responses were analysed using bivariate and logistic regression tests. Of the 1146 parents who provided responses, 80% stated their child has a regular source of primary care. Of these, care is mostly received by a general practitioner (GP) (95%) in GP group practices (71%). Approximately 20% have changed where their child receives primary care in the last year. No associations were observed between having a regular source of primary care and frequency of ED attendance in the past 12 months, although parents whose child did not have a regular source of primary care were more likely to view the ED as a more convenient place to receive care than the primary care provider (39% without regular source vs. 18% with regular source; P < 0.0001). Children were less likely to have a regular source of primary care if their parents were younger, had a lower household income, lower education, and were visiting a hospital with a lower Socio-Economic Indexes for Areas (SEIFA) rank. Policy options to improve continuity of care for children may require investigation. Increasing the prevalence of regular source of primary care for children may in turn reduce ED visits. © 2015 The Authors. Journal of Paediatrics and Child Health © 2015 Paediatrics and Child Health Division (Royal Australasian College of Physicians).

  5. Modular invariant representations of infinite-dimensional Lie algebras and superalgebras

    PubMed Central

    Kac, Victor G.; Wakimoto, Minoru

    1988-01-01

    In this paper, we launch a program to describe and classify modular invariant representations of infinite-dimensional Lie algebras and superalgebras. We prove a character formula for a large class of highest weight representations L(λ) of a Kac-Moody algebra 𝔤 with a symmetrizable Cartan matrix, generalizing the Weyl-Kac character formula [Kac, V. G. (1974) Funct. Anal. Appl. 8, 68-70]. In the case of an affine 𝔤, this class includes modular invariant representations of arbitrary rational level m = t/u, where t ∈ Z and u ∈ N are relatively prime and m + g ≥ g/u (g is the dual Coxeter number). We write the characters of these representations in terms of theta functions and calculate their asymptotics, generalizing the results of Kac and Peterson [Kac, V. G. & Peterson, D. H. (1984) Adv. Math. 53, 125-264] and of Kac and Wakimoto [Kac, V. G. & Wakimoto, M. (1988) Adv. Math. 70, 156-234] for the u = 1 (integrable) case. We work out in detail the case 𝔤 = A1(1), in particular classifying all its modular invariant representations. Furthermore, we show that the modular invariant representations of the Virasoro algebra Vir are precisely the “minimal series” of Belavin et al. [Belavin, A. A., Polyakov, A. M. & Zamolodchikov, A. B. (1984) Nucl. Phys. B 241, 333-380] using the character formulas of Feigin and Fuchs [Feigin, B. L. & Fuchs, D. B. (1984) Lect. Notes Math. 1060, 230-245]. We show that tensoring the basic representation and modular invariant representations of A1(1) produces all modular invariant representations of Vir generalizing the results of Goddard et al. [Goddard P., Kent, A. & Olive, D. (1986) Commun. Math. Phys. 103, 105-119] and of Kac and Wakimoto [Kac, V. G. & Wakimoto, M. (1986) Lect. Notes Phys. 261, 345-371] in the unitary case. We study the general branching functions as well. All these results are generalized to the Kac-Moody superalgebras introduced by Kac [Kac, V. G. (1978) Adv. Math. 30, 85-136] and to N = 1 super
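
    For reference, the integrable (u = 1) case that the paper generalizes is the Weyl-Kac character formula, stated here in standard notation as a reader's aid rather than quoted from the paper:

      % Weyl-Kac character formula for an integrable highest weight module
      % L(\Lambda) over a Kac-Moody algebra \mathfrak{g}, with Weyl group W,
      % positive roots \Delta_+ and Weyl vector \rho.
      \[
        \operatorname{ch} L(\Lambda)
        = \frac{\sum_{w \in W} (-1)^{\ell(w)}\, e^{\,w(\Lambda+\rho)-\rho}}
               {\prod_{\alpha \in \Delta_+} \bigl(1 - e^{-\alpha}\bigr)^{\operatorname{mult}\alpha}}
      \]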

  6. Three-Dimensional Integrated Survey for Building Investigations.

    PubMed

    Costantino, Domenica; Angelini, Maria Giuseppa

    2015-11-01

    The study presents the results of a survey aimed at representing a building collapse and assessing the feasibility of 3D modelling in support of structural analysis. An integrated survey using topographic, photogrammetric, and terrestrial laser techniques was carried out to obtain a three-dimensional (3D) model of the building, plans and prospects, and the particulars of the collapsed area. The authors acquired, by a photogrammetric survey, information about the regular parts of the structure, while using laser scanner data they reconstructed a set of the more interesting architectural details and areas with higher surface curvature. Specifically, the texturing process provided a detailed 3D structure of the areas under investigation. The analysis of the acquired data proved very useful both in identifying the causes of the disaster and in helping the reconstruction of the collapsed corner, showing the contribution that integrated surveys can give to preserving architectural and historic heritage. © 2015 American Academy of Forensic Sciences.

  7. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in a sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
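
    Concretely, the reproducing kernel of a sum of RKHSs is the sum of the component kernels, so the regularized solution still reduces to one linear system via the representer theorem: solve (K + λnI)α = y with K the summed kernel matrix. A numpy sketch with two Gaussian kernels of different scales fitted to a nonflat target (all parameter values hypothetical):

      import numpy as np

      rng = np.random.default_rng(6)

      def gaussian_kernel(a, b, sigma):
          d2 = (a[:, None] - b[None, :]) ** 2
          return np.exp(-d2 / (2.0 * sigma ** 2))

      # Nonflat target: a smooth trend plus a localized high-frequency bump.
      x = np.sort(rng.uniform(0, 1, 120))
      y = (np.sin(2 * np.pi * x)
           + 0.5 * np.sin(40 * np.pi * x) * (x > 0.7)
           + 0.1 * rng.standard_normal(x.size))

      # Sum-space kernel: large scale for the low-frequency component,
      # small scale for the high-frequency one.
      lam = 1e-3
      K = gaussian_kernel(x, x, 0.3) + gaussian_kernel(x, x, 0.02)
      alpha = np.linalg.solve(K + lam * x.size * np.eye(x.size), y)

      xt = np.linspace(0, 1, 400)
      Kt = gaussian_kernel(xt, x, 0.3) + gaussian_kernel(xt, x, 0.02)
      y_hat = Kt @ alpha
      print("train RMSE:", np.sqrt(np.mean((K @ alpha - y) ** 2)).round(3))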

  8. Regularization techniques for backward--in--time evolutionary PDE problems

    NASA Astrophysics Data System (ADS)

    Gustafsson, Jonathan; Protas, Bartosz

    2007-11-01

    Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, where disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated by a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.
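
    The amplification mechanism, and the effect of a fixed-magnitude regularization term, can be seen on a linear stand-in. In Fourier space, stepping the 1D heat equation backward multiplies mode k by exp(k² dt); adding a small fourth-order dissipation term caps the growth at high wavenumbers. A sketch (the KSE itself is nonlinear, so this is only the linear caricature):

      import numpy as np

      N, L, dt, steps = 128, 2 * np.pi, 1e-3, 20
      k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
      x = np.linspace(0, L, N, endpoint=False)
      u_end = np.exp(-2.0 * (x - np.pi) ** 2)
      u_end += 1e-6 * np.random.default_rng(7).standard_normal(N)  # noise

      for nu_reg in (0.0, 1e-3):  # no regularization vs. a k^4 penalty
          u_hat = np.fft.fft(u_end)
          # unregularized factor exp(+k^2 dt) blows up; the k^4 term tempers it
          growth = np.exp((k ** 2 - nu_reg * k ** 4) * dt)
          for _ in range(steps):
              u_hat = u_hat * growth
          u_back = np.real(np.fft.ifft(u_hat))
          print(f"nu_reg={nu_reg:g}: max |u| after backward march = "
                f"{np.abs(u_back).max():.3g}")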

  9. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.

  10. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    PubMed

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.

  11. Correlates of regular exercise during pregnancy: the Norwegian Mother and Child Cohort Study.

    PubMed

    Owe, K M; Nystad, W; Bø, K

    2009-10-01

    The aims of this study were to describe the level of exercise during pregnancy and to assess factors associated with regular exercise. Using data from the Norwegian Mother and Child Cohort Study conducted by the Norwegian Institute of Public Health, 34 508 pregnancies were included in the present study. Data were collected by self-completed questionnaires in gestational weeks 17 and 30, and analyzed by logistic regression analysis. The results are presented as adjusted odds ratios (aOR) with a 95% confidence interval. The proportion of women exercising regularly was 46.4% before pregnancy and decreased to 28.0 and 20.4% in weeks 17 and 30, respectively. Walking and bicycling were the most frequently reported activities before and during pregnancy. The prevalence of swimming tended to increase from prepregnancy to week 30. Exercising regularly prepregnancy was highly related to regular exercise in week 17, aOR=18.4 (17.1-19.7), and week 30, aOR=4.3 (4.1-4.6). Low gestational weight gain was positively associated with regular exercise in week 30, aOR=1.2 (1.1-1.4), whereas being overweight before pregnancy was inversely associated with regular exercise in week 17, aOR=0.8 (0.7-0.8), and week 30, aOR=0.7 (0.6-0.7). Also, women experiencing a multiple pregnancy, pelvic girdle pain, or nausea were less likely to exercise regularly.

  12. Verbal Working Memory Is Related to the Acquisition of Cross-Linguistic Phonological Regularities.

    PubMed

    Bosma, Evelyn; Heeringa, Wilbert; Hoekstra, Eric; Versloot, Arjen; Blom, Elma

    2017-01-01

    Closely related languages share cross-linguistic phonological regularities, such as Frisian -âld [ɔ:t] and Dutch -oud [ʌut], as in the cognate pairs kâld [kɔ:t] - koud [kʌut] 'cold' and wâld [wɔ:t] - woud [wʌut] 'forest'. Within Bybee's (1995, 2001, 2008, 2010) network model, these regularities are, just like grammatical rules within a language, generalizations that emerge from schemas of phonologically and semantically related words. Previous research has shown that verbal working memory is related to the acquisition of grammar, but not vocabulary. This suggests that verbal working memory supports the acquisition of linguistic regularities. In order to test this hypothesis we investigated whether verbal working memory is also related to the acquisition of cross-linguistic phonological regularities. For three consecutive years, 5- to 8-year-old Frisian-Dutch bilingual children (n = 120) were tested annually on verbal working memory and a Frisian receptive vocabulary task that comprised four cognate categories: (1) identical cognates, (2) non-identical cognates that either do or (3) do not exhibit a phonological regularity between Frisian and Dutch, and (4) non-cognates. The results showed that verbal working memory had a significantly stronger effect on cognate category (2) than on the other three cognate categories. This suggests that verbal working memory is related to the acquisition of cross-linguistic phonological regularities. More generally, it confirms the hypothesis that verbal working memory plays a role in the acquisition of linguistic regularities.

  13. Characterization of Window Functions for Regularization of Electrical Capacitance Tomography Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Peng, Lihui; Xiao, Deyun

    2007-06-01

    This paper presents a regularization method for electrical capacitance tomography (ECT) image reconstruction that uses different window functions as the regularizer. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. The window functions, such as the Hanning window and the cosine window, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours are clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of image reconstruction using different window functions as regularization.
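
    Spectral filtering is easiest to state via the SVD: expand the minimum-norm solution in singular components and weight each coefficient by a window value, so the noise-dominated small-singular-value tail is suppressed smoothly rather than cut off abruptly. A toy sketch with a smoothing matrix standing in for the ECT sensitivity matrix (the window definitions are illustrative, not the paper's exact forms):

      import numpy as np

      rng = np.random.default_rng(8)

      # Ill-posed toy problem: a Gaussian smoothing operator as forward matrix.
      n = 80
      i = np.arange(n)
      A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
      x_true = (np.abs(i - 25) < 8).astype(float) + 0.5 * (np.abs(i - 55) < 5)
      b = A @ x_true + 1e-3 * rng.standard_normal(n)

      U, s, Vt = np.linalg.svd(A)
      coef = U.T @ b / s           # unfiltered spectral coefficients
      m = int(np.count_nonzero(s > 1e-3))   # usable spectral band

      # Window functions as spectral filters over the first m components.
      windows = {
          "truncated": np.ones(m),
          "hanning": 0.5 * (1 + np.cos(np.pi * np.arange(m) / m)),
          "cosine": np.cos(0.5 * np.pi * np.arange(m) / m),
      }
      for name, w in windows.items():
          filt = np.zeros(n)
          filt[:m] = w
          x_rec = Vt.T @ (filt * coef)
          err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
          print(f"{name:>9}: relative error = {err:.3f}")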

  14. The association between regular marijuana use and adult mental health outcomes.

    PubMed

    Guttmannova, Katarina; Kosterman, Rick; White, Helene R; Bailey, Jennifer A; Lee, Jungeun Olivia; Epstein, Marina; Jones, Tiffany M; Hawkins, J David

    2017-10-01

    The present study is a prospective examination of the relationship between regular marijuana use from adolescence through young adulthood and mental health outcomes at age 33. Data came from a gender-balanced, ethnically diverse longitudinal panel of 808 participants from Seattle, Washington. Outcomes included symptom counts for six mental health disorders. Regular marijuana use was tracked during adolescence and young adulthood. Regression analyses controlled for demographics and early environment, behaviors, and individual risk factors. Nonusers of marijuana reported fewer symptoms of alcohol use disorder, nicotine dependence, and generalized anxiety disorder than any category of marijuana users. More persistent regular marijuana use in young adulthood was positively related to more symptoms of cannabis use disorder, alcohol use disorder, and nicotine dependence at age 33. Findings highlight the importance of avoiding regular marijuana use, especially chronic use in young adulthood. Comprehensive prevention and intervention efforts focusing on marijuana and other substance use might be particularly important in the context of recent legalization of recreational marijuana use in Washington and other U.S. states. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    PubMed

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although RSNs are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflects functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
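
    Operationally: estimate a shrunk covariance, invert it (shrinkage guarantees invertibility even when regions outnumber time points), and read partial correlations off the precision matrix P as -P_ij/√(P_ii P_jj). A sketch on synthetic data with one widely shared signal, assuming scikit-learn's LedoitWolf estimator is available:

      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(9)

      # Hypothetical rank-deficient setting: more "regions" (p) than time
      # points (n), so the sample covariance itself is not invertible.
      n, p = 150, 200
      shared = rng.standard_normal((n, 1))            # widely shared signal
      X = 0.8 * shared + rng.standard_normal((n, p))

      lw = LedoitWolf().fit(X)     # shrinkage makes the estimate invertible
      P = lw.precision_            # inverse covariance (precision matrix)

      # Partial correlation between regions, conditioning on all others:
      d = np.sqrt(np.diag(P))
      partial_corr = -P / np.outer(d, d)
      np.fill_diagonal(partial_corr, 1.0)

      print("Pearson corr(0,1):", np.round(np.corrcoef(X[:, 0], X[:, 1])[0, 1], 3))
      print("partial corr(0,1):", np.round(partial_corr[0, 1], 3))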

  16. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization

    PubMed Central

    Brier, Matthew R.; Mitra, Anish; McCarthy, John E.; Ances, Beau M.; Snyder, Abraham Z.

    2015-01-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although RSNs are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflects functional processes in addition to structural connectivity. PMID:26208872

  17. Global Regularity of 2D Density Patches for Inhomogeneous Navier-Stokes

    NASA Astrophysics Data System (ADS)

    Gancedo, Francisco; García-Juárez, Eduardo

    2018-07-01

    This paper is about Lions' open problem on density patches (Lions in Mathematical topics in fluid mechanics. Vol. 1, volume 3 of Oxford Lecture series in mathematics and its applications, Clarendon Press, Oxford University Press, New York, 1996): whether or not inhomogeneous incompressible Navier-Stokes equations preserve the initial regularity of the free boundary given by density patches. Using classical Sobolev spaces for the velocity, we first establish the propagation of C^{1+γ} regularity with 0 < γ < 1 in the case of positive density. Furthermore, we go beyond this to show the persistence of a geometrical quantity such as the curvature. In addition, we obtain a proof of C^{2+γ} regularity.

  18. Regular Gleason Measures and Generalized Effect Algebras

    NASA Astrophysics Data System (ADS)

    Dvurečenskij, Anatolij; Janda, Jiří

    2015-12-01

    We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.

  19. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
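
    The data-fit/model-norm tradeoff referred to here can be made concrete on a toy linear inverse problem: minimize ||Ax - b||² + λ||x - x0||² for a range of λ and tabulate the misfit against the deviation from the prior model x0 (an L-curve). A hypothetical numpy sketch, not PEST or the chapter's code:

      import numpy as np

      rng = np.random.default_rng(10)

      # Toy linear inverse problem: smoothing forward operator A, prior x0.
      n = 60
      i = np.arange(n)
      A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 4.0) ** 2)
      x_true = np.sin(2 * np.pi * i / n) + 0.3 * (i > 30)
      b = A @ x_true + 1e-2 * rng.standard_normal(n)
      x0 = np.zeros(n)  # prior/background model

      # Scan the regularization weight and record data misfit vs. deviation
      # from the prior -- the tradeoff curve used to choose the weighting.
      for lam in (1e-4, 1e-2, 1.0, 1e2):
          x = x0 + np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ (b - A @ x0))
          misfit = np.linalg.norm(A @ x - b)
          model_norm = np.linalg.norm(x - x0)
          print(f"lambda={lam:8.1e}: misfit={misfit:8.4f}, "
                f"||x - x0||={model_norm:7.3f}")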

  20. High-resolution seismic data regularization and wavefield separation

    NASA Astrophysics Data System (ADS)

    Cao, Aimin; Stump, Brian; DeShon, Heather

    2018-04-01

    We present a new algorithm, non-equispaced fast antileakage Fourier transform (NFALFT), for irregularly sampled seismic data regularization. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency wavenumber domain, and its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for other irregularly sampled temporal or spatial data processing.
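
    The antileakage idea can be sketched in 1-D: estimate Fourier coefficients directly from the irregular samples, pick the strongest component, subtract its contribution from the data, and iterate, so energy from strong components does not leak across neighbouring wavenumbers. The paper's NFALFT additionally accelerates this with a non-equispaced FFT; the loop below is only the plain antileakage scheme on hypothetical data:

      import numpy as np

      rng = np.random.default_rng(11)

      t = np.sort(rng.uniform(0.0, 1.0, 200))           # irregular sample times
      signal = 2.0 * np.sin(2 * np.pi * 12 * t) + np.cos(2 * np.pi * 31 * t)
      resid = signal.copy()

      freqs = np.arange(0, 64)                          # candidate frequencies (Hz)
      coeffs = np.zeros(freqs.size, dtype=complex)
      for _ in range(4):                                # pick components one by one
          spec = np.array([np.mean(resid * np.exp(-2j * np.pi * f * t))
                           for f in freqs])
          j = np.argmax(np.abs(spec))
          # double to account for the conjugate pair (our test signal is zero-mean,
          # so the f = 0 special case never dominates here)
          coeffs[j] += 2.0 * spec[j]
          resid -= np.real(2.0 * spec[j] * np.exp(2j * np.pi * freqs[j] * t))

      picked = freqs[np.abs(coeffs) > 0.1]
      print("recovered frequencies:", picked)           # expect 12 and 31
      print("residual rms:", np.sqrt(np.mean(resid ** 2)).round(3))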